Circadian Disruption Accelerates Tumor Growth and Angio/Stromagenesis through a Wnt Signaling Pathway
Epidemiologic studies show a high incidence of cancer in shift workers, suggesting a possible relationship between circadian rhythms and tumorigenesis. However, the precise molecular mechanism by which circadian rhythms affect tumor progression is not known. To identify possible mechanisms underlying tumor progression related to circadian rhythms, we set up nude mouse xenograft models. HeLa cells were injected into nude mice, which were then housed under two different conditions: one group was exposed to a 24-hour light cycle (L/L), the other to a more "normal" 12-hour light/dark cycle (L/D). We found a significant increase in tumor volume in the L/L group compared with the L/D group. In addition, tumor microvessels and stroma were strongly increased in L/L mice. Although the L/L tumors were hypervascularized, there was no associated increase in the production of vascular endothelial cell growth factor (VEGF). DNA microarray analysis showed enhanced expression of WNT10A, and our subsequent study revealed that WNT10A stimulates the growth of both microvascular endothelial cells and fibroblasts in tumors from light-stressed mice, along with marked increases in angio/stromagenesis. The findings that only the tumor stroma stained positive for WNT10A, and that WNT10A is also highly expressed in keloid dermal fibroblasts but not in normal dermal fibroblasts, indicate that WNT10A may be a novel angio/stromagenic growth factor. These findings suggest that circadian disruption induces the progression of malignant tumors via a Wnt signaling pathway.
Introduction
Modern lifestyles and the use of indoor lighting mean that many people are exposed to a long photoperiod throughout the year [1]. This is most evident in shift workers, especially night workers. This results in the disruption of circadian rhythms, which is known to induce many different types of stress [2]. Abnormal circadian rhythms, including exposure to light at night, are associated with a higher cancer risk and a poorer prognosis [3][4][5][6][7], which may be one of the reasons that the incidence of cancer is increasing in individuals subjected to these stresses. Circadian genes have been shown to function as oncogenes or tumor suppressors at both the systemic and cellular levels due to their roles in cell proliferation, cell cycle regulation, apoptosis and DNA damage signaling pathways [8,9]. However, the molecular or systemic mechanisms involved in tumor growth under artificial illumination stress conditions have not been identified. In fact, the question of whether artificial illumination stress promotes tumor growth at all is still controversial [10,11]. To identify the possible mechanisms underlying tumor progression related to circadian rhythms, we set up nude mouse xenograft models and revealed that artificial light stress induced tumor growth and angio/stromagenesis through WNT10A overexpression.
Circadian disruption induces tumor growth and angio/stromagenesis
The mice were divided into two groups: one group was exposed to 24-hour periods of artificial light (L/L), the other to a more conventional 12-hour light/dark cycle (L/D). First, we examined the effect of light stress on the in vivo growth of epidermoid cancer (HeLa) cell tumors and found a significant increase in tumor volume in the L/L group compared with the L/D group (Figure 1A). Similar results were obtained using a xenograft model incorporating prostate cancer (PC3) cells (Figure 1B). Examples of the HeLa cell tumors in the L/L and L/D groups are shown in Figure 1C and Figure S1. The L/L tumors were not only larger, but immunohistochemical analysis also showed them to be highly vascular, with increased numbers of CD34-positive (CD34+) and α-smooth muscle actin-positive (α-SMA+) cells (Figure 1D). The high vascularity of the tumor surface in the L/L group was reproducibly observed in four independent experiments with HeLa cells. Also, the microvessel density within the L/L tumors was significantly higher than that in the L/D tumors and correlated with a reduction in the amount of necrosis (Figure 1E). Masson trichrome staining of the tumor stroma showed a clear expansion of the extracellular matrix (ECM; stained blue) in the L/L tumors that was not seen in the L/D tumors (Figure 1F). Immunostaining of mouse type I collagen also showed the increase of ECM in the L/L tumors (Figure 1F). Taken together, these results clearly show that abnormal circadian rhythms induce marked tumor growth accompanied by increased angio/stromagenesis.
Microarray analysis of L/L tumors and L/D tumors
Next, we wanted to investigate the molecular mechanisms underlying the striking morphological differences between L/L and L/D tumors. Whole-genome expression DNA microarray analysis was performed to identify the genes and biological pathways that might be regulated by photoperiod manipulation. We found that 201 genes were transcriptionally upregulated in the L/L tumors compared with the L/D tumors (Table S1). Surprisingly, the expression of human VEGF-A and VEGF-B, which are the most important molecules in cancer angiogenesis, was the same in L/L and L/D tumors (Table S1 and Figure 2A), suggesting that a novel angiogenic factor is involved in the increased L/L tumor growth. We focused on genes encoding secretory proteins (Table 1) and found a greater than 9-fold upregulation in the expression of WNT10A in L/L tumors compared with L/D tumors. We designed human WNT10A- and mouse Wnt10a-specific primers for semi-quantitative RT-PCR analysis and checked their specificity (Figure S2A and Table S2). Semi-quantitative RT-PCR showed that not only human WNT10A, but also mouse Wnt10a, was upregulated in L/L tumors (Figure 2A); however, the expression of human WNT10A in the L/L tumors was still very low, as it could only be detected using nested techniques (1st PCR 30 cycles and 2nd PCR 35 cycles). Immunohistochemical analysis showed WNT10A expression mainly around the blood vessels, and it was increased in L/L tumors compared with L/D tumors (arrows in Figure 2B), indicating that this enhanced expression of WNT10A is derived from mouse tissues.
WNT10A-overexpressing cells induce tumor growth, angiogenesis and stromagenesis in in vivo xenograft models
To further investigate the role played by WNT10A in these morphological changes, we established another nude mouse model implanted with HeLa cells overexpressing WNT10A (Figures 3A and S3). Because the growth rate of these WNT10A-overexpressing cells was similar to that of control cells in vitro (Figure 3B), we were surprised to see that the growth rate of the implanted WNT10A-overexpressing tumors was faster than that of control tumors (Figure 3C). Furthermore, as shown in Figure 3D, most of the tumors were hypervascular, even those from mice housed under L/D conditions. Immunohistochemical analysis of these WNT10A-overexpressing tumors showed increased numbers of α-SMA+ cells coupled with increased size, increased microvessel density, significantly reduced areas of necrosis (Figures 3E and 3F) and an expanded ECM (Figure 3G).
WNT10A is expressed in fibroblasts and WNT10A stimulates the growth of both fibroblasts and vascular endothelial cells in vitro
Based on these results, we hypothesized that WNT10A functions as a growth factor for both vascular endothelial cells and fibroblasts and is involved in a novel mechanism of tumor growth, possibly via the promotion of angio/stromagenesis. To test this hypothesis, we used RT-PCR to show that normal human dermal fibroblast (NHDF) cells express WNT10A, but normal human dermal microvascular endothelial (HMVEC-d) cells do not (Figure 4A). This suggests the presence of a WNT10A-dependent autocrine growth system in fibroblasts. Cell proliferation analysis showed that the growth of both HMVEC-d and NHDF cells was stimulated by the addition of conditioned medium from WNT10A-overexpressing cells and was significantly inhibited by the addition of an anti-WNT10A antibody (Figures 4B and 4C). NHDF cells cultured in the recommended medium were also effectively inhibited by the addition of the anti-WNT10A antibody (Figure 4D). In addition, knockdown of WNT10A expression using siRNA inhibited the growth of NHDF cells (Figures 4E and 4F), confirming the existence of a WNT10A-dependent autocrine growth mechanism.
Tumor stroma cells express WNT10A
The pattern of WNT10A expression in human tumors was examined by immunohistochemistry (Figures 5A and 5B). A careful examination of the double-stained tissues showed a marked increase of WNT10A-positive fibroblastic cells in scirrhous-type gastric cancer, which is a representative cancer with hyperplastic stroma.

WNT10A is expressed in keloid stroma, but not in normal dermal stroma

The inducible expression of Wnt genes, including WNT10A, stimulates the proliferation of hepatic progenitor cells [12], and mutations in WNT10A are associated with an autosomal recessive ectodermal dysplasia [13,14]. In addition, the expression of Wnt signaling antagonists has been shown to be down-regulated in keloid fibroblasts, keloid being an aggressive wound-healing tissue [15,16]. These previous reports indicate that WNT signaling is involved in both tissue repair and wound healing. Because an old hypothesis suggests that cancer results from uncontrolled wound healing [17], we investigated WNT10A expression in keloid tissue. WNT10A-positive cells were only found in the vessels and peripheral nerves of normal skin (Figure 6A). On the other hand, WNT10A expression was markedly increased in fibroblastic cells in the hyperplastic stroma of keloid tissue (Figure 6B), suggesting that WNT10A functions as an angio/stromagenic gene in tumor progression, thus supporting the "old" hypothesis. Although WNT10A expression was observed in cultured normal human dermal fibroblasts (Figure 4E), it was not detected in fibroblasts in normal skin (Figure 6A). This may be due to the sensitivity of the immunostaining analysis. Another possibility is that the normal human dermal fibroblasts were cultured with growth factors that may induce WNT10A expression.
Oxidative stress induces WNT10A expression
The level of psychological and physiological stress experienced by the mice is hard to measure experimentally. We therefore measured 8-hydroxydeoxyguanosine (8-OH-dG), a DNA lesion associated with increased levels of oxidative stress. We found that the level of 8-OH-dG in lung tissue, but not liver, from L/L mice was significantly higher than that in L/D mice (Figure 7A). This is consistent with the fact that lung tissue is more sensitive to oxidative DNA damage than other tissues [18]. These data strongly suggest an association between disruption of circadian rhythms and increased oxidative stress responses. A preliminary study also showed that the promoter activity of the WNT10A gene was induced by the oxidizing agent hydrogen peroxide, and that the Wnt10a mRNA transcript level was also increased in NIH3T3 cells treated with hydrogen peroxide (Figures 7B and 7C). These results provide further evidence supporting the role of oxidative stress in tumor promotion and progression.
Discussion
Greater understanding of the complexity of the tumor microenvironment, and of the role of tumor angiogenesis, will lead to further advances in cancer treatment [4,19,20]. The results presented in this paper are both interesting and unexpected, and strongly suggest that disruption of circadian rhythms promotes tumor growth through WNT10A-dependent angio/stromagenesis resulting from increased levels of oxidative stress. The transcription factor NF-κB is activated by oxidative stress or tumor necrosis factor alpha (TNF-α) [21]. WNT10A has been shown to be an NF-κB target gene, and its expression is induced by TNF-α [22,23]. Since there is one NF-κB site in the promoter region of the WNT10A gene, it is conceivable that WNT10A might be regulated by the NF-κB pathway. The WNT signaling pathway has been implicated in angiogenesis [24] and in the tumor stroma microenvironment [25]. These data suggest that both endothelial cells and stromal cells are activated by WNT signals from cancer cells. On the other hand, our data indicate that both endothelial cells and stromal cells may be activated by WNT10A signals from non-tumor cells, such as cancer-associated fibroblasts. WNT signaling has been separated into a "canonical" pathway and "noncanonical" pathways [26]. Since the canonical WNT signaling pathway stabilizes β-catenin, we hypothesized that WNT10A might also stabilize β-catenin. The expression of β-catenin was observed in the endothelial cells of newly formed tumor vessels (Figure S4), suggesting that Wnt/β-catenin signaling plays a role in tumor angiogenesis. WNT signaling is also known to play an important role in cancer and stem cell biology [27], indicating that WNT10A might affect not only the tumor microenvironment, but also stem cells themselves.
There are some limitations to this study. We cannot exclude the possibility that other physiological and/or hormonal factors, such as melatonin, affected the growth of the implanted cancer cells in our mouse models [28][29][30]. Subcutaneous injection of rapidly growing human cancer cells into nude mice provided a setting in which tumor growth could be assessed in a relatively short time span. An orthotopic model would be a better way to confirm our results, because it more accurately reproduces the interactions between tumor cells and their microenvironment [31]. Nevertheless, our data clearly show that WNT10A has angio/stromagenic activity. Further analysis is required to clarify whether WNT10A-Frizzled binding mediates cell proliferation in both endothelial cells and stromal cells. Examining WNT10A receptors and associated signal transduction pathways may provide valuable insights into the role of circadian rhythms in tumor progression [32,33]. Our findings not only support the emerging links between circadian rhythm, oxidative stress and tumor progression at the molecular level, but also warn of the adverse effects of artificial light.
Primary cells, cell lines and culture conditions
HMVEC-d and NHDF cells were purchased from Lonza Co. and maintained with the EGM-2-MV BulletKit and FGM-2 BulletKit (Lonza Co.), respectively. HMVEC-d cells were cultured in endothelial cell basic medium (EBM) containing 5% FBS and a growth factor mixture containing hydrocortisone, ascorbic acid, FGF, VEGF, IGF, EGF and gentamycin. NHDF cells were cultured in fibroblast basic medium (FBM) containing 2% FBS and the appropriate growth factors (insulin, FGF, and gentamycin). The human prostate cancer cell line PC3 was a kind gift from Dr M Nakagawa (Kagoshima University, Kagoshima, Japan) [34]. Although the HeLa cell line was kindly gifted by Dr S Akiyama (Tokushima University, Tokushima, Japan) as the human epidermoid cancer KB cell line [35], we carried out STR profiling at the National Institute of Biomedical Innovation in Japan, which revealed that the KB cell line is identical to the HeLa cell line. The mouse fibroblast NIH3T3 cell line was obtained from the Japanese Cancer Research Resources Bank (JCRB) [36]. HeLa cells and human prostate cancer PC3 cells were cultured in Eagle's minimal essential medium as described previously [37,38]. NIH3T3 cells were cultured in Dulbecco's modified Eagle's minimal essential medium. These media were purchased from Nissui Seiyaku (Tokyo, Japan) and contained 10% fetal bovine serum. Cell lines were maintained in a 5% CO2 atmosphere at 37 °C.
Anti-WNT10A antibody
A polyclonal antibody was raised against WNT10A by multiple immunizations of a New Zealand white rabbit with synthetic peptides. The synthetic peptide sequences were RKLHRLQLDALQRGKGLSHGVPEHPALPC (aa 172-199) and CGGQLEPGPAGAPSPAPGAPGPRRRASPA (aa 307-334). This antibody was used for the Western blot and cell proliferation assays. For the cell proliferation assays, antibodies were purified from both control and WNT10A antisera using protein G columns (Mab Trap, Amersham Pharmacia Biotech).
Mouse studies
All protocols were approved by the Ethics Committee of Animal Care and Experimentation, University of Occupational and Environmental Health (admission number AE-07039), and were performed according to the Institutional Guidelines for Animal Experiments and to Law (number 105) and Notification (number 6) of the Japanese government. All surgery was performed under anesthesia (a mixture of ketamine 50 mg/kg and medetomidine 1 mg/kg), and all efforts were made to minimize suffering. Eight-week-old male nude mice (BALB/c nu/nu; Kyudo Co.) were used for subcutaneous xenografting. Mice were injected with 100 µl (1×10^6 cells) of HeLa cell or PC3 cell suspension at two separate dorsal sites. The subcutaneous xenografting experiments were carried out four times for HeLa cells and twice for PC3 cells. Mice were randomly caged (5/cage) and subdivided into L/L and L/D groups. Tumor volume was measured using the two principal perpendicular diameters: V = length (mm) × (width (mm))² × 1/2.
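The tumor volume computation above is simple enough to script directly. The following is a minimal Python sketch of the stated formula, where the example caliper measurements are hypothetical.

```python
# Minimal sketch of the stated formula V = length x width^2 x 1/2.
# The example measurements (10 mm x 8 mm) are hypothetical.

def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Tumor volume (mm^3) from the two principal perpendicular diameters."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume(10.0, 8.0))  # 320.0 mm^3
```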
Preparation of human tissue samples
Human normal skin, keloid tissue and cancer samples from different organs were examined in the Department of Pathology and Cell Biology at the University of Occupational and Environmental Health in Kitakyushu, Japan. The diagnosis was re-evaluated and confirmed by at least three board-certified surgical pathologists who had examined formalin-fixed, paraffin-embedded tissue sections stained with haematoxylin and eosin (H&E) or other appropriate immunohistochemical stains.

DNA microarray analysis and RT-PCR

DNA microarray analysis was performed using 3D-Gene (Toray Industries). All data are MIAME compliant, and the raw data have been deposited in a MIAME-compliant database (GSE23969). Only one tumor from each of the L/D and L/L groups, representative of the typical tumor size and color, was used for RNA preparation in the same experiment. Total RNA was isolated from tumors and cultured cells using QIAshredder and RNeasy Mini kits (Qiagen). RT-PCR was performed using the Qiagen OneStep RT-PCR kit. The primers used in this study are listed in Table S2. The cycle number was 30, with some exceptions; the cycle numbers for these exceptions are listed in the relevant figure legends. The human specificity of the h-WNT10A primers is shown using NHDF cells (Figure 4A), and the mouse specificity of the m-Wnt10a primers is shown using NIH3T3 cells (Figure S2A). The specificity of the human and/or mouse β-actin primers is shown using HeLa cells and NIH3T3 cells (Figure S2B).
Plasmid construction
WNT10A cDNA was constructed by PCR using a SuperScript cDNA library (Invitrogen) (Table S2). The PCR product was cloned into the pGEM-T Easy vector (Promega), and the full-length cDNA fragment was recloned into the pcDNA3.1 vector (Invitrogen). To prepare the reporter plasmid containing the promoter region of the human WNT10A gene, PCR of human genomic DNA was performed using the appropriate primers (listed in Table S2). The PCR product was then cloned into the pGL3-basic vector (Promega).
Cloning of stable transfectants
HeLa cells were transfected with pcDNA3.1-WNT10A using the Effectene reagent (Qiagen) and cultured with 500 µg/ml hygromycin for 15-20 days. The resulting colonies were isolated, and the cellular expression level of WNT10A in each clone was analyzed by Western blotting with an anti-WNT10A antibody.
Western blotting analysis
Whole-cell lysates were prepared as previously described [38,39]. Whole-cell lysates (100 µg) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene difluoride (PVDF) microporous membranes (Millipore, Billerica, MA, USA) using a semi-dry blotter. The blotted membranes were blocked with 5% (w/v) skimmed milk in 10 mM Tris, 150 mM NaCl and 0.2% (v/v) Tween 20, and incubated for 1 h at room temperature with the primary antibody. The following antibodies and dilutions were used: a 1:1,000 dilution of anti-WNT10A and a 1:5,000 dilution of anti-β-actin. The membranes were then incubated for 45 min at room temperature with a peroxidase-conjugated secondary antibody and visualized using an ECL kit (GE Healthcare Bio-Science, Buckinghamshire, England, UK). Detection was performed with a LAS-4000 mini (FUJIFILM).
Conditioned Media (CM)
Stable transfectants were cultured in MEM containing 10% FBS until they formed confluent monolayers. The MEM was then replaced with either conditioned EBM (all growth factors and FBS at 0.1-fold the concentrations of normal EBM) or conditioned FBM (insulin, FGF and FBS at 0.1-fold the concentrations of normal FBM) for 24 hours, after which the medium was collected. The CM was then centrifuged and filtered to remove cells and debris. Control-CM was prepared from the culture medium of growing control-cl2 cells, and WNT10A-CM was prepared from the culture medium of growing WNT10A-cl25 cells.
Cell Proliferation Assays
WNT10A-overexpressing cell lines and control cell lines were seeded in 12-well plates and counted every 12 hours. NHDF cells were seeded in 12-well plates and transfected with siRNA as described above. For the purposes of analysis, ''0 hours'' was taken to be 12 hours post transfection. The cells were harvested with trypsin and counted with a Coulter-type cell size analyzer (CDA-500, Sysmex). BrdU was incorporated using a cell proliferation ELISA kit (Roche Diagnostics).
Luciferase assay
Transient transfection and luciferase assays were performed as previously described [40]. Briefly, PC3 cells (1×10^5) were seeded into 12-well plates and, one day later, transfected with the WNT10A reporter plasmid using the Superfect reagent (Qiagen). The cells were then incubated under normal culture conditions, or in the presence of 10 mmol/L (10 mM) H2O2. Forty-eight hours post-transfection, the cells were lysed with reporter lysis buffer (Promega), and luciferase activity was detected using a Picagene kit (Toyo Ink). The results shown are normalized against protein concentrations measured using the Bradford method and are representative of at least three independent experiments.
Measurement of 8-hydroxydeoxyguanosine
The amount of 8-hydroxydeoxyguanosine (8-OH-dG) present in the cellular DNA was measured using a high-performance liquid chromatography (HPLC)-electrochemical detector (ECD) system as previously described [41]. The final 8-OH-dG value was calculated as the number of 8-OH-dG residues per 10^6 dG residues.
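As a minimal illustration of this normalization (assuming calibrated molar amounts of 8-OH-dG and dG are already available from the HPLC-ECD run), the ratio can be computed as follows; the example values are hypothetical.

```python
# Sketch of the final normalization: 8-OH-dG residues per 10^6 dG residues.
# The input molar amounts (from calibrated HPLC-ECD/UV peaks) are hypothetical.

def oh8dg_per_million_dg(amount_8ohdg_pmol: float, amount_dg_pmol: float) -> float:
    """Number of 8-OH-dG residues per 10^6 unmodified dG residues."""
    return amount_8ohdg_pmol / amount_dg_pmol * 1e6

print(oh8dg_per_million_dg(0.25, 120_000.0))  # ~2.1 per 10^6 dG
```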
Statistical analysis
We compared continuous variables with repeated-measures analysis of variance (ANOVA), and differences between groups were determined by Scheffe's test. Student's t test was used for statistical analysis of variables between two groups. All error bars indicate standard deviation.
"Biology",
"Medicine"
] |
Diffusive skin effect and topological heat funneling
Non-Hermitian wave systems have attracted intense attention in the past decade, since they reveal interesting physics and generate various counterintuitive effects. However, in diffusive systems, which are inherently non-Hermitian with natural dissipation, the robust control of heat flow has hitherto remained a challenge. Here we introduce the skin effect into diffusive systems. Different from the skin effect in wave systems, where asymmetric couplings were enabled by dynamic modulations or judicious gain/loss engineering, asymmetric couplings of the temperature fields in diffusive systems can be realized by directly contacted metamaterial channels. Topological heat funneling is further presented, where the temperature field automatically concentrates towards a designated position and shows a strong immunity against defects. Our work indicates that the diffusive system can provide a distinctive platform for exploring non-Hermitian physics as well as thermal topology. Non-Hermitian systems can host a range of unusual physical phenomena, such as the skin effect, which has typically been observed in optical lattices and electrical circuits. Here, the authors show the non-Hermitian skin effect in thermal transport, demonstrating that topologically protected heat flow can be realized by using thermal metamaterials.
Non-Hermitian Hamiltonians play a core role in describing systems that exchange energy with the environment. In wave systems, non-Hermitian physics has been explored intensely in the past decade [1][2][3][4][5]. For example, a parity-time symmetric system that operates at an exceptional point (EP) can realize a unidirectionally invisible cloak 6, a single-mode laser 7, one-way mode switchers 8,9, high-sensitivity sensors 10, and wireless power transfer 11,12. Dynamically modulated systems with broken time-reversal symmetry can produce various novel gyrators, isolators and circulators 13. Besides, an intriguing "skin effect" was recently discovered in non-Hermitian systems, where all the bulk states localize at the edges under the open boundary condition (OBC), which modifies the bulk-boundary correspondence of Hermitian cases [14][15][16][17][18][19][20][21][22]. The non-Hermitian skin effect shows application prospects in nonreciprocal energy manipulation and has been demonstrated in various fields such as optical lattices 23, mechanical systems 24, quantum-walk networks 25, and electrical circuits 26. In diffusive systems, which are inherently non-Hermitian, thermal metamaterials have advanced rapidly, as they enable agile and flexible manipulation of heat flow 27,28. Based on the theory of transformation thermotics, fruitful progress has been achieved, such as thermal cloaks that conceal objects in heat conduction, radiative camouflages against infrared detection [29][30][31], and thermal diodes and concentrators that isolate and harvest heat directionally [32][33][34][35]. Since the diffusive system is dissipative, it provides a natural platform to investigate non-Hermitian physics, for example, the observation of an anti-parity-time symmetric phase transition at an EP [36][37][38]. A Hamiltonian respecting anti-parity-time symmetry can be constructed by imposing convection on diffusive systems, which generates dynamically "stopped" temperature fields 36. However, the diffusive counterparts of asymmetric coupling and the skin effect have not been discovered hitherto. Because energy exchanges are symmetric in general cases, breaking the time-reversal symmetry with temporal modulations 23 and judicious gain/loss engineering of couplings 26 were considered the few available choices.
In this work, we show that it is possible to realize asymmetric coupling and the skin effect in diffusive systems by directly contacting thermal metamaterials. Different from the cases in wave systems, the effective non-Hermitian Hamiltonians describing asymmetric couplings in diffusive systems are imaginary, owing to the dissipative nature, as shown in Fig. 1a. Our finding starts from the fact that heat and temperature variations during the energy exchange are not equivalent: the temperature evolution is actually asymmetric between different components. As a result, we realize the diffusive skin effect, which shows that the eigenstates in all bulk bands can become localized edge states under open boundaries. This is fundamentally different from the Hermitian case, in which only the edge modes can have boundary localization. Unlike the non-Hermitian skin effects in wave systems, where the asymmetric couplings were implemented by temporal modulations or tailored gain/loss couplings, asymmetric coupling of the temperature fields can be easily enabled by directly contacted thermal metamaterials. We further propose an approach to construct a periodic non-Hermitian Hamiltonian in the parameter space from an aperiodic structure. On this basis, skin-effect-induced heat funneling is showcased, where the temperature fields concentrate to a designated position, regardless of the initial condition, and show a strong immunity against defects, as schematically shown in Fig. 1b. The robust temperature fields with nontrivial gradients can be useful for thermoelectric power generation or heat harvesting 39. Moreover, the high sensitivity of the diffusive skin effect to changes of the boundary conditions makes it possible to achieve topological thermal sensing 40.
Results
Directly asymmetric coupling. In order to understand asymmetrically coupled diffusive systems intuitively, we can start from a double-ring toy model that was experimentally demonstrated recently 36. As shown in Fig. 2a, two rings are vertically coupled in the z direction through an interlayer. According to Fourier's law of heat conduction, the coupling equations can be written as 36-38

∂T₁/∂t = D₁ ∂²T₁/∂x² + h₂₁(T₂ − T₁),
∂T₂/∂t = D₂ ∂²T₂/∂x² + h₁₂(T₁ − T₂),    (1)

where T₁ (T₂) is the temperature field of the upper (lower) channel and D₁ = κ₁/(ρ₁C₁) (D₂ = κ₂/(ρ₂C₂)) is the diffusivity of the upper (lower) channel, with κ₁ (κ₂), ρ₁ (ρ₂) and C₁ (C₂) being the thermal conductivity, mass density and heat capacity. x is the position along the channel. The heat exchange rate of the upper (lower) channel is h₂₁ = κᵢ/(ρ₁C₁bd) (h₁₂ = κᵢ/(ρ₂C₂bd)), where κᵢ is the thermal conductivity of the coupling layer, and b and d are the thicknesses of the ring channel and the interlayer, respectively.
As the temperature field in the ring structure is spatially periodic, we can assume that Eq. (1) has a wave-like solution of

T₁,₂ = A₁,₂ e^{i(βx − ω₁,₂t)} + T₀.

In the solution, A₁,₂ is the amplitude, β is the propagation constant, ω₁,₂ is the complex frequency, and T₀ is the reference temperature, which is set to zero for simplicity. Here we employ the temperature gradient and the position of maximum temperature to represent the amplitude and phase of the wave-like solution. As all the ring channels have the same size, the propagation constants of the circulating "heat wave" in them are equal, β = m·2π/(2πR) = m/R, where m is the mode order and R is the radius of the ring. Usually, only the first-order mode is taken into consideration, since it is very stable and can be selectively excited. Temperature variations in each channel can be regarded as uniform on condition that the layers are thin enough. As a result, the heat transfer in these channels possesses coherent properties. Substituting the wave-like solution into Eq. (1) and considering the boundary continuity condition, we can deduce an effective Hamiltonian acting on (A₁, A₂)^T 36-38,

H = i ( S₁   h₂₁
        h₁₂  S₂ ),    (2)

where S₁ = −(β²D₁ + h₂₁) and S₂ = −(β²D₂ + h₁₂). In Eq. (2), the Hamiltonian of the diffusive system is imaginary, which is different from the (real) one in the wave system, as shown in Fig. 1a. Moreover, when the material parameters satisfy ρ₁C₁ ≠ ρ₂C₂, the ring coupling becomes asymmetric, with ih₂₁ ≠ ih₁₂. It should be pointed out that the main diagonal elements in Eq. (2) can be unified into iS₀ by adjusting the diffusivities D₁,₂ along with h₂₁,₁₂; the couplings ih₂₁,₁₂ are denoted ic₂₁,₁₂ in Fig. 1a. The eigenvalues of Eq. (2) are solved to be

ω± = i(S₁ + S₂)/2 ± i√[(S₁ − S₂)²/4 + h₁₂h₂₁].

We find that the amplitudes in the two channels are asymmetric in the steady state; meanwhile, they are always time dependent, with the factor e^{−iω±t}, because the system is dissipative. Therefore, our structure does not require dynamic modulation or gain/loss engineering to achieve asymmetric couplings.
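To make the two-ring picture concrete, the short sketch below numerically diagonalizes the reconstructed 2×2 Hamiltonian. All parameter values are illustrative assumptions rather than the experimental ones; the asymmetry of the eigenvector amplitudes follows directly from h₂₁ ≠ h₁₂.

```python
# Numerical check of H = i[[S1, h21], [h12, S2]] for the double-ring model.
# All parameter values below are illustrative assumptions, not measured ones.
import numpy as np

R = 0.1                          # ring radius (m); first-order mode m = 1
beta = 1.0 / R                   # propagation constant (1/m)
kappa_i, b, d = 5.0, 5e-3, 1e-3  # interlayer conductivity, layer thicknesses
rho1C1, rho2C2 = 1.0e6, 2.5e6    # volumetric heat capacities (J m^-3 K^-1)
D1, D2 = 1.0e-4, 0.4e-4          # channel diffusivities (m^2 s^-1)

h21 = kappa_i / (rho1C1 * b * d)  # coupling entering channel 1
h12 = kappa_i / (rho2C2 * b * d)  # coupling entering channel 2; h21 != h12
S1, S2 = -(beta**2 * D1 + h21), -(beta**2 * D2 + h12)

H = 1j * np.array([[S1, h21], [h12, S2]])
w, v = np.linalg.eig(H)
print("eigenvalues (purely imaginary, i.e. pure decay):", w)
print("amplitude ratios |A1/A2| of the eigenmodes:", np.abs(v[0] / v[1]))
```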
Diffusive skin effect and non-Bloch band theory. For implementation, we can take the densities of directly coupled channels varying with a ratio of a². Figures 2a and 2b display the asymmetrically coupled temperature fields for different a. In order to realize the non-Hermitian diffusive Su-Schrieffer-Heeger (SSH) model, we connect the asymmetric-coupling unit cells with symmetric couplings, as in Fig. 2c 16. Each unit cell consists of sublattices (A + B), where the asymmetric intracell couplings and the symmetric intercell coupling are i(h₁ ± δ) and ih₀, respectively. For the open boundary condition, the Hamiltonian is 16,20

H_OBC = Σ_l [ i(h₁ + δ)Â†_l B̂_l + i(h₁ − δ)B̂†_l Â_l ] + Σ_l ih₀( B̂†_l Â_{l+1} + Â†_{l+1} B̂_l ) + iS₀ Σ_l ( Â†_l Â_l + B̂†_l B̂_l ),    (3)

where Â†_l and B̂†_l are the creation operators of the A and B sites in the lth period. For the periodic boundary condition (PBC), the Bloch-mode Hamiltonian in momentum space is expressed as

H_PBC(K) = iS₀Î + i[ d_x σ̂_x + (d_y + iδ)σ̂_y ],    (4)

which respects the sublattice symmetry σ̂_z⁻¹(H_PBC(K) − iS₀Î)σ̂_z = −(H_PBC(K) − iS₀Î), with σ̂_{x,y,z} the Pauli matrices and K the Bloch vector 19. The eigenvalues of Eq. (4) can be deduced as

ω±(K) − iS₀ = ±i√( d_x² + d_y² − δ² + 2iδd_y ),

where d_x = h₁ + h₀cosK and d_y = h₀sinK. When the complex spectrum reaches the zero-energy level, viz. ω±(K) − iS₀ = 0, we obtain EPs at which the eigenstates degenerate (the sign '×' in Fig. 2d). However, the EPs deduced in the PBC case (h₁ = 2.05) contradict the OBC ones (h₁ = 1.45), which indicates that the bulk-boundary correspondence breaks down (the complex spectrum of the PBC case can be found in Supplementary Note 1) 16,20. Returning to the OBC case, the eigen-equation is solved by replacing the Bloch phase factor e^{iK} with the generalized one, α = re^{iK} 16,20. According to the non-Bloch band theory, the nearest-neighbor coupling of the temperature fields can be transformed by writing ϕ_{A_l,B_l} = α^l ϕ_{A,B}, where ϕ_{A,B} are the eigenstates in the lth unit cell; they degenerate at ω_l − iS₀ = 0 (i.e., at the EPs), with the generalized phase factor r = √[(h₁ − δ)/(h₁ + δ)]. Therefore, the EPs locate on the hyperbolic curve h₁² − δ² = h₀². The corresponding amplitudes of the eigenstates for PBC and OBC at the EP are shown in Fig. 2e and Fig. S2 (Supplementary Note 2).
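A direct numerical check of the skin effect is straightforward: building the open-boundary SSH matrix with the asymmetric intracell couplings and inspecting all eigenvectors shows their weight piling up on one edge. The sketch below uses δ = 1.05 and h₀ = 1.0, values consistent with the quoted EPs (h₁ = 2.05 for PBC, h₁ = 1.45 for OBC), but they are our inference, not parameters stated explicitly in the text.

```python
# Open-boundary non-Hermitian SSH chain with intracell couplings i(h1 +/- delta)
# and intercell coupling i*h0; bulk states scale as r^l with
# r = sqrt((h1-delta)/(h1+delta)) < 1, so eigenvectors accumulate at one edge.
import numpy as np

h1, delta, h0, S0, N = 1.45, 1.05, 1.0, -3.0, 20   # N unit cells (illustrative)
dim = 2 * N
H = 1j * S0 * np.eye(dim, dtype=complex)
for l in range(N):
    A, B = 2 * l, 2 * l + 1
    H[A, B] += 1j * (h1 + delta)        # intracell coupling, B_l -> A_l
    H[B, A] += 1j * (h1 - delta)        # intracell coupling, A_l -> B_l
    if l < N - 1:
        H[B, A + 2] += 1j * h0          # intercell couplings (symmetric)
        H[A + 2, B] += 1j * h0
w, v = np.linalg.eig(H)
left_weight = (np.abs(v[: dim // 2, :]) ** 2).sum(axis=0)
print("mean eigenvector weight on the left half:", left_weight.mean())
```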
Here the EP is also a topological phase transition point and the topological phase diagram is shown in the "Methods".
Heat funneling effect. Figure 3a presents the heat-funnel model, with two mirrored SSH chains spliced at the designated interface. In our case, there are 5 periods for the left chain and 1 period for the right chain. The thicknesses of the ring channels and the interlayers are b and d. The internal and external radii of each channel are R₁ and R₂, with R₁ ≈ R₂. We define the propagation constant of the temperature fields as β = 1/R₂, corresponding to the first-order mode. The thermal conductivities, mass densities and heat capacities of the thermal metamaterials for the ring channels and the coupling interlayers are (κ_n, ρ_n, C₀) and (κ_in, ρ₀, C₀), where the channel index n indicates that the associated material parameters are varied. The coupling coefficients between channels n and n+1 can be expressed as ih_{n,n+1} = iκ_in/(ρ_{n+1}C₀bd) and ih_{n+1,n} = iκ_in/(ρ_nC₀bd) for the forward and backward couplings, respectively. In order to keep the couplings in the non-Hermitian diffusive SSH model respecting the translation symmetry in parameter space, the parameters ρ_n and κ_in of the thermal metamaterials should be arranged as

ρ_n = a^{n−1}ρ₀ for n = 1, 3, 5, …;  ρ_n = a^nρ₀ for n = 2, 4, 6, …;  κ_in = a^nκ_i0 for n = 1, 2, 3, 4, …,

which is shown by the setting of the heat-funnel model in Fig. 3b. However, it is difficult to find natural materials that satisfy these graded parameters in implementation. According to the effective medium theory, we can realize the effective densities of the channels and the effective thermal conductivities of the coupling interlayers with composite metamaterials. In this way, the effective material parameters can be satisfied by modulating the doping rates of materials with highly contrasting properties (such as Cu and polydimethylsiloxane). Here the asymmetric intracell couplings are ih₀a and ih₀/a, with the symmetric intercell coupling ih₀, where h₀ = κ_i0/(ρ₀C₀bd). From the semi-infinite model in Fig. 2c, we can easily obtain h₁ = (a + a⁻¹)h₀/2 and δ = (a − a⁻¹)h₀/2. However, this cannot lead to the equivalent tight-binding model in Fig. 3c if the diffusivity takes the form D_n = κ₀/(ρ_nC₀) (the diamond dots in Fig. 3d), because the on-site terms iS_n = −i(β²D_n + h_{n,n+1} + h_{n+1,n}) should be uniform. Here we adjust D_n in the way shown by the square dots in Fig. 3d to homogenize iS_n into a common value iS₀. The Hamiltonian of the non-Hermitian SSH model can then be derived by replacing K with K + i ln a, and the eigenvalues are deduced as ω±(K) − iS₀ = ±i2h₀cos(K/2). It is interesting to find that the system operates exactly at the topological transition point, with a gapless band, for any value of a. The generalized Bloch band is plotted in Fig. 3e, where the blue dots denote EPs.
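The graded-parameter rule can be verified numerically: computing h_{n,n+1} = κ_in/(ρ_{n+1}C₀bd) and h_{n+1,n} = κ_in/(ρ_nC₀bd) from the arrangement above should reproduce alternating intracell couplings h₀/a and h₀a and symmetric intercell couplings h₀. A minimal sketch, in units of ρ₀, κ_i0 and h₀, with an illustrative a and channel count:

```python
# Check that rho_n = a^(n-1) (n odd) or a^n (n even) and kappa_in = a^n give
# intracell couplings h0/a, h0*a and symmetric intercell couplings h0,
# via h_{n,n+1} = kappa_in / rho_{n+1} (in units of rho0, kappa_i0, h0).
import numpy as np

a, h0, n_ch = 0.4, 1.0, 8
rho = np.array([a**(n - 1) if n % 2 else a**n for n in range(1, n_ch + 1)])
kap = np.array([a**n for n in range(1, n_ch)])    # layer between n and n+1
fwd = h0 * kap / rho[1:]    # h_{n,n+1}: alternates h0/a (intracell), h0 (intercell)
bwd = h0 * kap / rho[:-1]   # h_{n+1,n}: alternates h0*a (intracell), h0 (intercell)
print(np.round(fwd, 3))
print(np.round(bwd, 3))
```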
We construct the 2D heat-funnel model for simplicity in the theoretical analysis and the full-wave simulations. The structural parameters are chosen as b = 5 mm, d = 1 mm, and R₁ ≈ R₂ = 100 mm. The material parameters are set to ρ₀ = 1000 kg m⁻³, C₀ = 1000 J kg⁻¹ K⁻¹, κ₀ = 100 W m⁻¹ K⁻¹, and κ_i0 = 5 W m⁻¹ K⁻¹. In Figs. 4a, c, we show the imaginary eigenvalues −Im ω and the eigenmodes ϕ_n at a = 1 and a = 0.4. The results reveal that the temperature gradients of all eigenmodes are almost evenly distributed in the channel array at a = 1, while at a = 0.4 the temperature fields localize in the interface channels. In experiments, the temperature gradient in a channel can be measured as the difference between the maximum and minimum temperature values with an infrared camera. The normalized steady-state temperature gradient in each channel corresponds to the eigenmode solved from the Hamiltonian. As heat transfer is inherently dissipative and the temperature field normally needs much time to evolve into the steady state, a large initial temperature gradient should be applied (for example, 273 K for the cold-side cooling and 320 K for the hot-side heating).
To test the topological heat funneling, we study the temperature field evolutions in a symmetric coupling structure at a = 1, where κ_i0 is set to 0.3 W m⁻¹ K⁻¹. Imposing a random stimulation input, the temperature field naturally concentrates toward the central channels, since the eigenmode with the lowest decay rate is excited and observed. In Fig. 5a, the simulated evolutions of the temperature fields agree well with the tight-binding model, since the normalized temperature gradient of each channel is consistent with the theoretical analysis. For the asymmetric coupling case at a = 0.4, however, the temperature field tends to funnel to the designated interface at channels 10 and 11 in Fig. 5b, regardless of the initial conditions. Note that the heat funneling is robust against defects with different coupling strengths, channel numbers, and interface locations (disorder analysis can be found in Supplementary Note 3). For example, robustness persists when channels 2 and 3 are set with an asymmetric coupling defect (a = 2.5) in the model, in contrast to the symmetric coupling system, where the temperature field localizes at the defect (Figs. 5c and 5d). The evolutions of the temperature field along the z axis are also presented in Figs. 5e and 5f, which validates the concept of the heat funneling effect schematically displayed in Fig. 1b. The above discussions are based on the gapless system with equivalent intracell and intercell couplings. Without loss of generality, we also investigate the gapped diffusive system, with the intercell coupling much larger than the asymmetric intracell couplings. In this case, the temperature field concentrates at channel 12, since the heat flow directions of the inverted structures become the same (Supplementary Note 4 and Fig. S4).
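The funneling dynamics itself can be mimicked with a toy rate equation dT/dt = MT for two mirrored asymmetric chains: regardless of the random initial profile, the surviving field peaks at the central interface. The sketch below is schematic; the channel count, couplings and uniform on-site decay are illustrative, not those of the simulated metamaterial.

```python
# Toy funneling dynamics dT/dt = M T for two mirrored asymmetric chains.
# Channel count, couplings and uniform on-site decay S0 are illustrative.
import numpy as np
from scipy.linalg import expm

n, a, h0, S0 = 21, 0.4, 1.0, -3.0
M = S0 * np.eye(n)
for i in range(n - 1):
    f, b = (h0 / a, h0 * a) if i < n // 2 else (h0 * a, h0 / a)
    M[i + 1, i] += f    # strong coupling pumps toward the central interface
    M[i, i + 1] += b    # weak backward coupling
T0 = np.random.default_rng(1).random(n)   # random initial temperature profile
for t in (0.0, 2.0, 8.0):
    prof = expm(M * t) @ T0
    print(f"t = {t:4.1f}: field peaks at channel {np.argmax(np.abs(prof)) + 1}")
```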
Conclusion
In summary, we investigate the physical mechanisms of the diffusive skin effect and topological heat funneling. Unlike the cases in wave systems, the diffusive skin effect and heat funneling can be realized in a static framework, namely directly contacted thermal metamaterials. The periodic non-Hermitian Hamiltonian can be constructed in the parameter space from an aperiodic structure. Our results show that the diffusive system provides a distinctive platform to explore topological physics and non-Hermitian dynamics. Our work is expected to inspire further exploration of other intriguing effects, including the higher-order diffusive skin effect, topological heat flow transfer, the high-efficiency thermoelectric effect 39, heat harvesting, and thermal sensing 40,41.
Methods
EPs in the topological phase. Under the similarity transformation with factor r = √[(h₁ − δ)/(h₁ + δ)], the couplings transform as h̄₁ = √[(h₁ + δ)(h₁ − δ)] = √(h₁² − δ²) and h̄₀ = h₀, from which we deduce the transformed Bloch vector components d̄_x = h̄₁ + h₀cosK and d̄_y = h₀sinK. The topological invariant (winding number W) is calculated by

W = (1/2π) ∮ dK ( d̄_x ∂_K d̄_y − d̄_y ∂_K d̄_x )/( d̄_x² + d̄_y² ).

Figure 6 shows the topological phase diagram of the semi-infinite SSH model, where the EPs locate at the hyperbolic boundaries h₁² − δ² = h₀².
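Numerically, the winding number can be evaluated by tracking the phase of the off-diagonal element q(K) = h̄₁ + h₀e^{−iK} of the transformed Hamiltonian over the Brillouin zone. The sketch below assumes this standard SSH-type invariant with h̄₁ = √(h₁² − δ²); the sign convention and test values are ours.

```python
# Winding number from the phase of q(K) = hbar1 + h0*exp(-iK), with
# hbar1 = sqrt(h1^2 - delta^2); W = 1 when |hbar1| < h0 (topological), else 0.
import numpy as np

def winding(h1, delta, h0, nk=4001):
    hbar1 = np.sqrt(complex(h1**2 - delta**2))
    K = np.linspace(-np.pi, np.pi, nk)
    q = hbar1 + h0 * np.exp(-1j * K)
    total_phase = np.diff(np.unwrap(np.angle(q))).sum()
    return int(round(-total_phase / (2 * np.pi)))

print(winding(0.5, 0.3, 1.0))  # |hbar1| = 0.4  < h0 -> W = 1
print(winding(2.0, 0.3, 1.0))  # |hbar1| ~ 1.98 > h0 -> W = 0
```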
At the zero mode, the real-space coupling equation can be transformed into a standard form with the generalized phase factor e^{iK} → α = re^{iK}, giving α± = ±√[(h₁ − δ)/(h₁ + δ)]. As h₁ + δ = h₀a and h₁ − δ = h₀/a, this simplifies to α± = ±1/a. Obviously, EPs appear at δ = h₁, where the coupling is absolutely asymmetric (unidirectional). The temperature field is exponentially localized at the right boundary with the rate α± when 0 < δ < h₁. In Fig. S6 (Supplementary Note 6), we show the diffusive skin effect in the Hatano-Nelson model, with the densities of adjacent channels and the thermal capacities of the interlayers having the same gradient of a².
Data availability
All relevant data are available from the corresponding author X.F.Z. upon request.
Code availability
The code is available from the corresponding author X.F.Z. upon reasonable request.
"Physics"
] |
Oxidative Stability of Stripped Soybean Oil during Accelerated Oxidation: Impact of Monoglyceride and Triglyceride—Structured Lipids Using DHA as sn-2 Acyl-Site Donors
The current work aimed to clarify the effects of four structured lipids, namely monoglycerides with docosahexaenoic acid (2D-MAG), diacylglycerols with caprylic acid (1,3C-DAG), a triglyceride with caprylic acid at the sn-1,3 positions and DHA at the sn-2 position (1,3C-2D-TAG), and caprylic triglyceride, on the oxidative stability of stripped soybean oil (SSO). The results revealed that, compared to the blank SSO group, the oxidation induction periods of the samples with 2 wt% 2D-MAG and with 1,3C-DAG were delayed by 2–3 days under accelerated oxidation conditions (50 °C), indicating that 2D-MAG and 1,3C-DAG prolonged the oxidation induction period of SSO. However, when 2D-MAG was added to SSO containing α-tocopherol, the inhibitory effect of α-tocopherol on SSO oxidation was reduced; thus, 2D-MAG exhibited antioxidative or pro-oxidative effects depending on whether antioxidants were added. Compared to caprylic triglyceride, DHA at the sn-2 acyl site induced oxidation of the structured lipids, thus further promoting the oxidation of SSO. The antioxidant was able to inhibit not only the oxidation of DHA in the SSO, but also the transesterification of sn-2 DHA to sn-1/sn-3 DHA in the structured lipid.
Introduction
Structured lipids (SLs) are a class of glycerides with a specific molecular structure or function, obtained by chemically or enzymatically changing the fatty acid (FA) composition or positional distribution on the glycerol skeleton. Differences among structured lipids include not only the three FA species linked to the glycerol skeleton, but also the specific localization of the FAs on the glycerol skeleton (the outer sn-1 and sn-3 positions, or the middle sn-2 position). As fatty acids with special nutritional or physiological functions are attached to specific positions on the glycerol skeleton, structured lipids can combine the functions of various FAs with some or all of the properties of natural oils. As a new type of functional lipid, structured lipids contribute to digestion, energy absorption, timely supply of energy, and reduction of calories, and are important for nutrition and health care [1,2], because of which they are being widely utilized in the food [3], medicine [4], health products [5], and other industries.
Stripped Soybean Oil Preparation
The method for preparing stripped soybean oil was mainly adapted from Waraho et al. [18], with some modifications. All the glassware and sample bottles used in the experiment were soaked in 3 mM HCl solution overnight to remove transition-metal ions, and then repeatedly rinsed with double-distilled water for 4 h. Soybean oil was eluted with n-hexane through a column of silicic acid and activated carbon to separate the minor polar components (e.g., tocopherols, free FAs, mono- and diacylglycerols, and phospholipids) that may interfere with the oxidative stability of the oil. The 35 cm long chromatographic column with an inner diameter of 3.0 cm was successively packed with 22.5 g silicic acid, 5.625 g activated carbon, and 22.5 g silicic acid, such that the packing material was divided into three layers. The chromatographic column was wrapped with aluminum foil to avoid photooxidation of the oil. Soybean oil (500 g) was dissolved in 500 mL n-hexane and then slowly poured into the column. The oil was eluted with 2 L n-hexane. To delay the oxidation of the oil during purification, the stripped oil was collected in a container covered with aluminum foil and surrounded by ice. After washing, n-hexane was removed at 37 °C using a vacuum rotary evaporator (Heidolph, Schwabach, Germany). Residual traces of the solvent were removed by flushing with nitrogen. Then, 300 g of the stripped oil was transferred in portions to several 3 mL vials, protected with nitrogen, and maintained at −80 °C for subsequent experiments. A small amount of the sample was mixed with n-hexane, spotted on a thin-layer chromatography plate, and developed in n-hexane:diethyl ether:formic acid (84:16:0.04, v/v/v) to confirm the removal of the minor components from the SSO.
Synthesis of 2-Monoacyl Glycerol (2-MAG)
Structured lipids were synthesized according to Wang et al. [19]. Algae oil (0.9 g) and absolute ethanol (3 g) were weighed and added to a 50 mL beaker. Lipozyme RM IM (0.4 g) was added, and the mixture was magnetically stirred (IKA, Staufen, Germany) at 40 °C for 5 h at a rotating speed of 200 rpm. After the reaction, the mixture was centrifuged (Thermo Fisher Scientific Inc., Waltham, MA, USA) to remove the lipase. n-Hexane (30 mL) and 0.8 mol/L KOH (10 mL) were added to a portion of the centrifuged sample, and the solution was shaken vigorously for 2 min, followed by 5 min of standing. The lower alcohol phase was then further extracted by adding 15 mL n-hexane, shaking vigorously for 2 min, and leaving the mixture undisturbed. After the two layers formed, the ethanol phase containing the 2-MAGs was collected. The organic solvent was removed by rotary evaporation (40 °C), and the obtained samples were weighed and stored in a refrigerator at −20 °C until use. The content of 2-MAG was quantified by comparison with the peak area of a 2-monoolein standard.
Synthesis of 1,3C-2D-TAG Structured Lipids

MAG (100 g) was weighed, and 150 g caprylic acid, 25 g Lipozyme RM IM, 1000 mL n-hexane and 2 g 4A molecular sieve were added. The resulting mixture was magnetically stirred at 40 °C for 5 h at a rotating speed of 200 rpm. After the reaction was completed, the lipase was removed by centrifugation at 5000 rpm for 10 min to obtain the crude 1,3C-2D-TAG sample. To further purify the obtained 1,3C-2D-TAG structured lipids, the crude samples were loaded onto a chromatography column and eluted with n-hexane/diethyl ether (95/5, v/v). After elution, the eluted fractions were analyzed using thin-layer chromatography (TLC) and high-performance liquid chromatography (HPLC). The purified 1,3C-2D-TAG structured lipids were collected according to the analysis results and stored at −20 °C for subsequent analysis and characterization.
Analysis of Lipid Composition Using TLC
The lipid components (MAGs, DAGs, TAGs, fatty acid ethyl esters) were analyzed using TLC. The TLC plate was activated at 105 °C for 1 h before analysis. Samples (10 µL) were dissolved in 200 µL of n-hexane/diethyl ether (84/16, v/v), and 1 µL of each diluted sample was spotted on the activated TLC plate, which was then placed in an oven at 120 °C to remove volatile solvents. The plate was then developed with a mixture of n-hexane:diethyl ether:formic acid (84/16/0.04, v/v/v), and the fractions corresponding to the different lipid classes were scraped from the plate for further analysis.
Storage Methods and Oxidation Conditions of the Sample
The prepared MAG, DAG and TAG (1, 2 and 5 wt%, respectively) and α-tocopherol (0.2 wt%) were added to the stripped soybean oil in different proportions according to the experimental scheme in Figure 1. All of the samples were thoroughly mixed on a vortex oscillator to achieve uniform distribution. Then, 3 g of each group of samples was added to 3 mL glass vials and placed in an oven at 50 ± 1 °C, protected from photooxidation, for 20 days. A control group without additives was prepared at the same time. The primary oxidation products (hydroperoxides) and secondary oxidation products (propanal) were determined using a non-return sampling method at regular intervals of oxidation time during storage.
Analysis of Structured Lipid Types Using HPLC
The MLM structured lipid product was subjected to HPLC analysis based on the method of More et al. [20], with modifications. Two microliters of the sample were injected into an HPLC system (Inertsil ODS-2 column: 250 × 4.6 mm i.d., particle size 5 µm; Shimadzu, Kyoto, Japan) equipped with a UV detector (Shimadzu, Kyoto, Japan) set at 254 nm. The flow rate was maintained at 1 mL/min during the analysis. The mobile phase was acetonitrile:acetic acid (9/1, v/v). The composition of the mixture was determined based on the retention times and peak order, and the corresponding relative contents were calculated from the peak areas.
Analysis of the Composition and Content of sn-2 Site FA Using GC-FID
Fatty acid methyl esters (FAMEs) were prepared and analyzed according to AOAC Official Method 996.01. The sample was saponified with 2 mL of 0.5 M NaOH-CH3OH at 60 °C for 30 min and then reacted with 14% boron trifluoride in a water bath at 65 °C for 5 min. After the reaction was completed, the FAMEs were extracted with about 2 mL n-hexane. FAMEs were characterized and quantified using a GC-17A Shimadzu gas chromatograph equipped with an AOC-5000 autosampler (Shimadzu, Kyoto, Japan) and a 30 m × 0.32 mm column. The carrier gas flow rate was 1.0 mL/min, and the split ratio was 100:0. The inlet and detector temperatures were fixed at 250 °C. The initial oven temperature was held at 60 °C for 3 min, then increased to 175 °C at a rate of 5 °C/min and maintained for 15 min, and finally raised to 220 °C at a rate of 2 °C/min and held for 10 min. The FAs were qualitatively and quantitatively analyzed based on the retention times and relative peak areas according to the FAME standards.
Determination of Peroxide Value (POV)
Lipid hydroperoxides, the primary oxidation products, were measured via the POV, an indicator of hydroperoxide content that reflects the oxidation stage, according to the method of Shantha et al. [21] with modifications. A mixed solution of isooctane:2-propanol (3:1, v/v) (1.5 mL) was added to 0.3 mL of sample and mixed uniformly on a vortex shaker (Benchmark Scientific, NJ, USA). The sample was centrifuged at 4000 r/min for 5 min, after which 0.2 mL of the upper organic phase was added to 2.8 mL of a mixed solution of methanol:1-butanol (2:1, v/v), followed by the addition of 15 µL of 3.94 M ammonium thiocyanate and 15 µL of ferrous solution (obtained by mixing 0.132 M BaCl2 and 0.144 M FeSO4·7H2O), and the mixture was left to react in the dark for 20 min. The absorbance of the sample was then measured at a wavelength of 510 nm using a spectrophotometer (ThermoSpectronic, Waltham, MA, USA).
Determination of Propanal
The propanal concentration was determined according to the method of Shantha et al. [21]. In brief, 1 mL of sample was added to a 10 mL glass vial, and the aluminum cap of the vial was crimped tight. The measurement was performed after heating at 55 °C for 15 min in the heating bath of a gas chromatograph autosampler (Shimadzu, Kyoto, Japan). After the solid-phase microextraction (DVB/Carboxen/PDMS) fiber of the autosampler penetrated the septum and absorbed the volatiles for 1 min, the SPME fiber was transferred to the inlet and held for 3 min, followed by injection onto the GC column at 65 °C and incubation for 10 min. The inlet temperature was 250 °C and the split ratio was 1:5. The volatiles were separated on a Supelco 30 m × 0.32 mm Equity DB-1 column with 1 µm film thickness. The carrier gas was helium at 1.5 mL/min. The oven temperature was set at 45 °C and held for 5 min; next, the temperature was raised from 45 to 250 °C at a rate of 15.0 °C/min and maintained for 1 min. The FID detector (Shimadzu, Kyoto, Japan) was set to 250 °C. A standard curve was prepared with known concentrations of propanal, and the concentration of propanal released from the sample was determined from the peak area. The experimental scheme for the oxidative stability of stripped soybean oil with structured lipids is summarized in Figure 1.
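Quantification against the standard curve is a simple linear calibration; a minimal sketch follows, in which the calibration points and the sample peak area are hypothetical.

```python
# Linear standard-curve calibration for propanal from GC-FID peak areas.
# Calibration points and the sample peak area are hypothetical.
import numpy as np

std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])          # umol/kg
std_area = np.array([1.2e4, 2.5e4, 6.1e4, 1.22e5, 2.45e5])   # peak areas

slope, intercept = np.polyfit(std_area, std_conc, 1)

def propanal_umol_per_kg(peak_area: float) -> float:
    """Propanal concentration interpolated from the standard curve."""
    return slope * peak_area + intercept

print(round(propanal_umol_per_kg(8.0e4), 1))  # ~33 umol/kg (illustrative)
```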
Statistical Analysis
All tests were repeated three times with freshly prepared samples. Data calculations were carried out using Origin 8.5. All of the data are presented as means ± standard deviations (n = 3). One-way ANOVA and Duncan's multiple range test were applied to determine significant differences between means (p < 0.05).
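For reference, the two comparisons can be sketched with SciPy as below (Duncan's multiple range test is not available in SciPy and would require a dedicated package); the triplicate values are hypothetical.

```python
# One-way ANOVA across groups and Student's t test between two groups,
# as described above; the triplicate POV values are hypothetical.
import numpy as np
from scipy import stats

blank = np.array([258.5, 251.2, 262.3])   # mmol/kg
mag2d = np.array([182.4, 190.1, 176.8])
dag13 = np.array([175.9, 169.4, 181.2])

f_stat, p_anova = stats.f_oneway(blank, mag2d, dag13)
t_stat, p_ttest = stats.ttest_ind(blank, mag2d)
print(f"ANOVA: p = {p_anova:.4f}; t test: p = {p_ttest:.4f} (alpha = 0.05)")
```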
Purification of Soybean Oil Samples
The main components of commercial-grade vegetable oils are glycerides, including monoglycerides, diacylglycerols, and triglycerides, which account for about 95% of the mass of commercial vegetable oils. Studies have shown that the concentrations of monoglycerides and diacylglycerols are much lower than those of triglycerides in animal fats and vegetable oils, as processing leads to partial hydrolysis of triglycerides to monoglycerides and diacylglycerols [18]. The remaining ~5% of vegetable oil is mainly composed of unsaponifiable compounds (such as hydrocarbons, tocopherols, tocotrienols, phytosterols, chlorophyll, carotenoids, flavonoids, free fatty acids, polar polyphenols, and carbohydrates) and trace metal ions. These components are mainly derived from the plant seed oil film or arise via hydrolysis during storage or pressing [22]. Although their amounts are low, most of these components significantly affect the physical and chemical properties of oils, exerting antioxidant (e.g., phenols) or pro-oxidant (e.g., free FAs, metal ions, and chlorophyll) activities [23,24].
To reduce the effect of unsaponifiable matter on sample oxidation, the commercial vegetable oil was refined and stripped (removal of unsaponifiable matter). As shown in Table 1, the hydroperoxide and propanal contents of the stripped oil were 2.22 mmol/kg and 9.83 µmol/kg, respectively, which were significantly lower than those of the commercial samples (p < 0.05). Khan et al. [25] showed that the stability of oil or water-in-oil emulsions prepared from stripped soybean oil was significantly lower than that of the non-purified oil system, and that the unsaponifiable matter in natural oil considerably affects the oxidative stability of the oil. Therefore, the content of unsaponifiable matter in the stripped soybean oil used in this study was low, and the oxidative interference of polar small molecules during oil storage was also reduced, which ensured a more accurate and reasonable assessment of the effects of the different DHA sn-2 structured lipids on the oxidation of SSO. Table 1. Hydroperoxide and propanal content in oil samples before and after purification (25 °C).
Effect of Different Type of Structured Lipids on Oxidation Stability of SSO
Studies have shown that the concentration of diacylglycerols in vegetable oils is between 0.8 and 5.8% of the total oil content [26]. Therefore, to determine the effects of monoglycerides and diacylglycerols on the oxidative stability of SSO, 2 wt% of monoglyceride with DHA (2D-MAG), diacylglycerol with caprylic acid (1,3C-DAG), triglyceride with caprylic acid at the sn-1,3 positions and DHA at the sn-2 position (1,3C-2D-TAG), or caprylic triglyceride (1,2,3C-TAG) was added to the SSO. Hydroperoxide and propanal contents were used as monitoring indicators to study the effects of the different structured lipids on the oxidation of SSO under accelerated oxidation (50 °C).
Hydroperoxides are the primary initial oxidation products of FAs, and the peroxide value (POV) is commonly used to evaluate the oxidation state of FAs [27]. Figure 2A shows that after storage at 50 °C for 2 days, the POV of the 1,3C-2D-TAG group and the SSO blank group increased significantly compared to those of the other experimental groups (p < 0.05), reaching 289.65 and 258.5 mmol/kg, respectively, on the seventh day. Unlike in the SSO blank group, the POV of the 2D-MAG and 1,3C-DAG groups began to increase significantly only after the third day (p < 0.05), indicating that 2D-MAG and 1,3C-DAG prolonged the oxidation induction period of SSO to a certain extent. Ohno et al. [28] likewise reported that the auto-oxidation of DAG at 50 °C was slower than that of TAG, with a longer oxidation induction period. DAG and MAG are the main trace components in natural oils; DAG harbors one free hydroxyl group on the glycerol backbone, while MAG has two. The differences in the physical and chemical properties of the structured lipids may thus arise from the molecular structures of DAG, MAG, and TAG. Laszlo et al. [29] argued that the acyl migration rate of 1,3-DAG is slower than that of 2-MAG because the presence of two FA groups in DAG leads to a large deformation of the ring intermediate, thereby raising the transition-state energy barrier. All acyl sites of 1,2,3C-TAG carry medium-chain saturated FAs, and accordingly its POV began to increase only slowly after 5 days of storage, eventually reaching 58.28 mmol/kg (7 days). This indicates that oil oxidation is related not only to the number of acyl groups attached to the glycerol carbon chain but also to the type of acyl FA, which considerably affects the degree and rate of oxidation. Wang et al. [30] reported that the oxidation rate of triacylglycerol was higher than that of diacylglycerol and monoacylglycerol; our hypothesis, however, is that the oxidation rate of a structured lipid correlates with the type of fatty acid attached to glycerol. Although the content of unsaturated FAs in DAG was higher, its oxidation rate was still significantly lower than that of TAG in our work. One possible explanation is that intramolecular free-radical transfer reactions are faster than intermolecular ones.
The structure of FAs is further destroyed when a large amount of hydroperoxide accumulates during lipid oxidation, forming secondary oxidation products such as aldehydes, ketones, acids, and alcohols. Among these, the change in propanal content is considered one of the most important indicators for evaluating the secondary oxidation products of oils. Figure 2B shows the effects of 2D-MAG, 1,3C-DAG, 1,3C-2D-TAG, and 1,2,3C-TAG on the secondary oxidation products of SSO. After storage for 3 days at 50 °C, the propanal content of the 1,3C-2D-TAG group increased significantly (p < 0.05), at a rate significantly higher than those of the other groups. The propanal content of the SSO without any added substance (SSO blank group) was 69.52 µmol/kg on the fourth day; although this was a significant increase from the previous day, it remained lower than that of the 1,3C-2D-TAG group. Compared to the sixth day, the propanal content of the SSO group increased by 79.6% on the seventh day, indicating that the oxidation induction period of the SSO blank group ended around the sixth day, after which secondary oxidation products formed. Among all experimental groups, the oxidation of the 1,2,3C-TAG group was relatively slow, with a propanal content of only 153.08 µmol/kg on the eighth day. The propanal contents of the 2D-MAG and 1,3C-DAG groups increased significantly on the seventh day (p < 0.05), indicating that their oxidation induction periods were delayed by 2-3 days relative to the 1,3C-2D-TAG and SSO blank groups.

Effect of Different Concentrations of 2D-MAG on the Oxidation Stability of SSO

Figure 3A shows the effect of different concentrations of 2D-MAG on the POV of stripped soybean oil. The POV of the SSO control group began to increase slowly from the second day and reached a maximum of 258.5 mmol/kg on the seventh day. Addition of 1% 2D-MAG had a negligible effect on the POV of stripped soybean oil, whereas the POV of the group with a higher concentration (2%) of 2D-MAG was lower than that of the SSO blank group. The group with the highest concentration (5%) of 2D-MAG showed significantly delayed oxidation (p < 0.05), with the POV lag period extended to 5 days, indicating that larger additions of 2D-MAG (2-5 wt%) prolonged the oxidation induction period of the oil to a certain extent. Chen et al. [16] confirmed that addition of oleic acid monoglyceride (0.5%, 1.5%, and 2.5%) to stripped soybean oil inhibited the formation of SSO hydroperoxides to varying degrees. Our results likewise revealed that addition of higher concentrations of 2D-MAG significantly slowed SSO oxidation.

Studies have shown that stripped soybean oil has an oxidation induction period of 1 day at 50 °C and 2 days at 60 °C [31]. Figure 3B shows that the propanal level of each group was higher on the third day than on the second day (p < 0.05). Among them, the propanal content of the SSO blank group on the third day (72.62 µmol/kg) was significantly higher than that of the other groups. A large amount of propanal formed in the SSO blank group, reaching a maximum of 747.12 µmol/kg on the eighth day. Addition of 2D-MAG at all levels inhibited the formation of propanal in SSO; 5% 2D-MAG produced the most pronounced decrease, giving the lowest propanal content on the eighth day. Studies have also confirmed that 0.05-2.5% MAG had no effect on the production of POV and hexanal in non-stripped soybean oil [18], whereas when MAG was added to SSO, the POV and hexanal tended to decrease. This indicates that in the presence of antioxidants (such as phenols), MAG is not the major factor affecting the oxidation process, possibly because the content of MAG in natural oils is small and its pro-oxidant or antioxidant capacity cannot be fully expressed. It can thus be deduced that the antioxidant capacity of 2D-MAG is more effective in the SSO system lacking trace polar substances, where it protects SSO from oxidation.
Effect of Different Concentrations of 1,3C-DAG on the Oxidation Stability of SSO
Diglycerides are natural components of many edible oils, and their content is in most cases below 5%. As glycerol has three positions at which acyl groups can be introduced, diacylglycerols usually occur in the sn-1,3 form. Figure 4A shows that after adding 1% 1,3C-DAG, the SSO hydroperoxide content began to increase significantly from the fourth day and was thereafter similar to that of the group without 1,3C-DAG (p > 0.05). The POV of the group with a higher concentration of 1,3C-DAG (2 wt%) began to increase significantly on the fifth day and was 194.33 mmol/kg on the seventh day. Hydroperoxide accumulated slowly in the group with 5% added 1,3C-DAG, increasing significantly on the sixth day and reaching 173.07 mmol/kg on the seventh day (an increase of 235.1%). Studies [32] have shown that low doses of diacylglycerols can reduce the surface tension and increase the diffusion of oxygen in oils, whereas high doses of diacylglycerols accumulate at the oil interface to form a barrier that prevents further dissolution of oxygen. In addition, Qi et al. [33] showed that pure DAG prepared from soybean oil has better oxidative stability than stripped soybean oil: 1,3C-DAG oil had a 22-day oxidation induction period, while soybean oil showed only 11 days. Based on this, we believe that a low content of 1,3C-DAG might not change the oxidation pathway and induction period of SSO, while higher concentrations of 1,3C-DAG significantly inhibited peroxidation.

Figure 4B shows that as the concentration of 1,3C-DAG increased, its inhibition of propanal formation in the SSO system became more obvious. The propanal content after the addition of 1% 1,3C-DAG began to increase significantly on the seventh day (p < 0.05), while that after the addition of higher concentrations of 1,3C-DAG did not. Chen et al. [16] showed that diacylglycerols (0.01-2.5%) prevented lipid oxidation in oil-in-water emulsions better than monoglycerides and were also stronger inhibitors of hexanal production in oils. The oxidation resistance of diacylglycerols may stem from their ability to crosslink and form a physical barrier, which reduces the interaction or contact surface between unsaturated FAs and oxygen.
Effect of Different Concentrations of 1,3C-2D-TAG on the Oxidation Stability of SSO
Raw soybean oil is highly susceptible to oxidation under accelerated conditions, owing to the presence of 7% α-linolenic acid and 50% linoleic acid. Figure 5A shows the effect of adding different amounts of 1,3C-2D-TAG on the oxidation of SSO. With 1% 1,3C-2D-TAG, the POV began to increase on the third day; as storage continued, the POV of this group remained essentially identical to that of the SSO blank group (p > 0.05), indicating that low doses of 1,3C-2D-TAG did not affect the oxidation process of SSO. However, from the third day, the POV of the group with a high concentration of 1,3C-2D-TAG (5 wt%) was significantly higher than those of the SSO control group and the low-concentration group (p < 0.05), indicating that oil prepared with a higher concentration of 1,3C-2D-TAG had lower oxidative stability. Rocha-Uribe et al. [34] demonstrated that the oxidative stability of lipids is related to CLA content and FA positional distribution, and that distribution of CLA at sn-2 sites is not conducive to the stability of the structured lipid. There are two main routes for degrading macromolecular FAs: one involves decarboxylation and decarbonylation followed by C-C bond cleavage to produce hydrocarbon radicals or carbocations; the other involves cleavage of the hydrocarbon chain followed by decarboxylation and decarbonylation to create short-chain molecules. These two routes compete with each other. Because the 1,3C-2D-TAG structured lipid carries DHA at the sn-2 position, it is partially polar, and the unsaturated long chain of DHA is more likely to be exposed. Therefore, 1,3C-2D-TAG is more likely to undergo decarboxylation and decarbonylation reactions after cleavage of the DHA hydrocarbon chain.
After the initiation of lipid oxidation, the reaction becomes an autocatalytic process, generating alkyl and hydroxyl radicals that continuously attack and oxidize the unsaturated FAs. Figure 5B shows that the propanal content of the oil supplemented with 5% 1,3C-2D-TAG started to increase rapidly on the fourth day, reaching a maximum of 860.2 µmol/kg on the eighth day. Compared to the SSO blank group, addition of even a low concentration of 1,3C-2D-TAG increased propanal formation to some extent, indicating that the 1,3C-2D-TAG prepared in this study does not possess notable oxidative stability. Yang et al. [35] likewise confirmed that MLM-type structured lipids possess low antioxidant capacity, possibly because increased transesterification during the preparation of structured lipids reduces the antioxidant capacity of the raw lipids. In addition, free FAs in samples oxidize faster than their esterified forms, which is another reason for the decrease in oxidative stability. It is therefore necessary to add appropriate antioxidants to protect the new structured lipids.

Effect of Different Concentrations of 1,2,3C-TAG on the Oxidation Stability of SSO

Figure 6A shows that the effects of adding different amounts of 1,2,3C-TAG on the oxidation of SSO varied considerably. Although the POV of the sample with 1% 1,2,3C-TAG increased on the third day, this addition had a negligible effect on SSO oxidation over the whole storage process (p > 0.05), except for a significant difference in POV relative to the SSO blank group on the fifth day. With increasing amounts of added 1,2,3C-TAG, the POV of the lipid began to increase only slowly from the fifth day (p < 0.05), and the POV with 5% 1,2,3C-TAG was only 43.6 mmol/kg on the seventh day of storage. This indicates that a high concentration of 1,2,3C-TAG can delay the oxidation of SSO, which may be related to the high saturation of the medium-chain caprylic acid in the 1,2,3C-TAG lipids used in this study. Similar to the POV results, the effect of 1,2,3C-TAG on the propanal content of the SSO system correlated with the amount added: as the amount of added 1,2,3C-TAG increased, the rate of increase in propanal content was significantly lower, which also indicates that high concentrations of 1,2,3C-TAG help delay the oxidation induction time of SSO. Guillén et al. [36] observed that lipids with highly unsaturated FAs in their triglycerides had the fastest oxidative degradation rates.

Compared to the structured lipid 1,3C-2D-TAG, the same concentration of the two TAGs affected the oxidation of the SSO lipid system very differently. When DHA was attached at the sn-2 acyl position, the oxidative stability of the triglyceride system decreased, indicating that the type of FA at the sn-2 acyl position significantly affects the degree and rate of lipid oxidation. The ordering of triglyceride oxidation rates may be caused by intramolecular free-radical transfer reactions occurring faster than intermolecular ones; it is therefore speculated that triglycerides with more stable intramolecular structures have higher oxidative stability.
Effect of 2D-MAG on the Oxidation Stability of SSO with α-Tocopherol
As shown in Figure 7A, the oxidative lag period of SSO was short, with the peroxide value on the fourth day increasing significantly to 55.8 mmol/kg (p < 0.05). The oxidative lag period of the lipid with added 2D-MAG was significantly delayed by 2 days, indicating that 2D-MAG inhibited the oxidation of SSO to a certain extent. The oxidation of SSO with 0.2 wt% α-tocopherol was strongly inhibited (p < 0.05), with the oxidative lag period reaching 10 days; once the α-tocopherol was exhausted, SSO oxidation resumed on the twelfth day, and the POV quickly reached 134.64 mmol/kg on the fourteenth day. After adding 2 wt% 2D-MAG to SSO containing α-tocopherol, oxidation was significantly inhibited during days 0-6 of storage, while the peroxide value increased rapidly during subsequent storage, reaching 179.45 mmol/kg. As shown in Figure 7B, the oxidation of SSO in the control group started on the fourth day and the propanal content increased rapidly, indicating that hydroperoxides were further degraded into secondary volatile oxidation products during SSO storage. After the addition of α-tocopherol, the oxidation of SSO did not accelerate significantly until the tenth day, with three times as much propanal produced on the twelfth day, indicating that α-tocopherol inhibited not only the accumulation of SSO hydroperoxides but also the increase in propanal content.

It is noteworthy that, compared to the α-tocopherol-supplemented SSO without 2D-MAG, the addition of 2D-MAG to the α-tocopherol-supplemented SSO shortened the oxidation induction period of the lipid by 4 days, indicating that in this system the addition of 2D-MAG promoted SSO oxidation, even though adding 2 wt% 2D-MAG to plain SSO inhibited oxidation to some extent. Chen et al. [16] added 10 µM α-tocopherol to a system consisting of SSO and medium-chain triacylglycerols (25:75 wt%) and observed a lag period of 11 days; after adding MAG (0.5, 1.5, and 2.5 wt%), the lag period of the system was reduced to 10, 9, and 7 days, respectively, which also indicates that MAG in bulk lipid can reduce the antioxidant capacity of α-tocopherol [37]. Because the effect of 2D-MAG on SSO with and without antioxidant differed so markedly, 2D-MAG and α-tocopherol showed no synergistic inhibition of SSO oxidation; on the contrary, the addition of 2D-MAG reduced the inhibitory effect of α-tocopherol. Lee et al. [38] showed that although the addition of quercetin or rutin had an antioxidant effect in stripped oil, they promoted oxidation in non-stripped soybean oil. A possible explanation in this study is that 2D-MAG in SSO inhibits oxidation mainly by being oxidized itself, sparing the SSO from oxidation.
Effect of 1,3C-2D-TAG on the Oxidation Stability of SSO with α-Tocopherol
The oxidative stability of 1,3C-2D-TAG in SSO with and without added antioxidant was compared. As shown in Figure 8A, the oxidation induction period of 1,3C-2D-TAG in the antioxidant-free SSO system was only 2 days, and the POV then reached 279.33 mmol/kg on the sixth day, significantly higher than that of the system without 1,3C-2D-TAG on the fourth day (161.29 mmol/kg; p < 0.05). This indicates that 1,3C-2D-TAG accelerated oxidation and reduced the stability of SSO. Koh et al. [33] showed that the oxidative stability of structured lipids prepared by enzymatic catalysis was significantly lower than that of the raw oil, which was related to the changed ratio of modified structured lipids to the original triglycerides; even the cis/trans and conjugated/non-conjugated FA composition affected the oxidation process. When 1,3C-2D-TAG was added to an SSO system containing 0.2% α-tocopherol, the lipid oxidation induction period was extended to 8 days, after which the POV rapidly increased to 232.46 mmol/kg on the tenth day (p < 0.05). Compared to the SSO without 1,3C-2D-TAG, 1,3C-2D-TAG thus reduced the inhibitory effect of the antioxidant. Because 1,3C-2D-TAG carries the highly unsaturated FA DHA at the sn-2 acyl position, the antioxidant only slightly inhibited the oxidation of 1,3C-2D-TAG.
The 1,3C-2D-TAG group started to produce considerable amounts of propanal on the fourth day, after which the propanal content increased rapidly (Figure 8B), and the amount of propanal produced was significantly higher than in the SSO group (p < 0.05). In addition, the oxidation induction period of 1,3C-2D-TAG in antioxidant-containing SSO was short (4 days), and the propanal content increased rapidly after 8 days. Compared to the antioxidant-containing SSO without 1,3C-2D-TAG, the addition of 1,3C-2D-TAG led to the accumulation of more propanal, suggesting that 1,3C-2D-TAG reduced the protective effect of the antioxidant in the SSO system, consistent with the changes in POV.
Effect of 1,2,3C-TAG on the Oxidation Stability of SSO with α-Tocopherol
To further study the effects of the sn-2 acyl position and the FA attached at that position on structured lipid oxidation, the oxidative stability of 1,2,3C-TAG was also assessed. Figure 9A shows that addition of 5% 1,2,3C-TAG delayed the oxidation lag period of the SSO to 4 days before the significant rise in POV began. The POV on the eighth day reached 103.21 mmol/kg, significantly lower than that without 1,2,3C-TAG on the eighth day (244.33 mmol/kg; p < 0.05). Compared to the effect of 1,2,3C-TAG on SSO oxidation, the DHA at the sn-2 position of 1,3C-2D-TAG induced the oxidation of the structured lipid itself, thereby further promoting the oxidation of SSO. Studies have shown that consumption of endogenous antioxidants in natural lipids, together with changes in FA composition and positional distribution within the triglyceride, are the main reasons for changes in the oxidative stability of structured lipids.

The effect of 1,2,3C-TAG on the propanal content of the SSO is shown in Figure 9B. Addition of 2% 1,2,3C-TAG significantly affected the propanal content of the SSO: on the eighth day of storage, the propanal content of the SSO with 1,2,3C-TAG was 245.45 µmol/kg, significantly lower than that of the system without 1,2,3C-TAG (747.12 µmol/kg). Because only saturated medium-chain FAs are linked to the glycerol backbone of 1,2,3C-TAG, propanal formed mainly from the oxidation of long-chain unsaturated FAs in the SSO system [39]. In the presence of antioxidants, 1,2,3C-TAG showed no significant inhibition of SSO oxidation. Combining the effects of 1,2,3C-TAG on the POV and propanal contents of the antioxidant-containing SSO, we conclude that 1,2,3C-TAG clearly inhibits the oxidation of the pure SSO system, although this inhibitory effect was not significant in the presence of antioxidants. Addition of antioxidants improves the oxidative stability of structured lipids; whether different antioxidants affect structured lipids differently remains to be studied.
Changes in the DHA Content in Different SSO during Accelerated Oxidation Period
To investigate the relationship between the DHA content at the sn-2 position and the oxidation of SSO supplemented with structured lipid, the changes in the sn-2 DHA content of 2D-MAG and 1,3C-2D-TAG during lipid oxidation were compared. Table 2 shows that no DHA was present in plain SSO, either among the free FAs or at the sn-2 site of the triglycerides, providing a blank control for studying changes in DHA content in the SSO system. After the addition of 20% 2D-MAG to the SSO, the total DHA in the system and the DHA at the sn-2 site of the structured lipid were 10.16% and 8.69% of the FA content, respectively. With continuing oxidation, the DHA content at the sn-2 site in the SSO plus 2D-MAG system began to decline; it decreased to 2.97% on the second day, 59.9% less than on the previous day (p < 0.05). After the fourth day of oxidation, no DHA was detected at the sn-2 site of 2D-MAG, indicating that the sn-2 DHA was oxidized and its content decreased rapidly. By comparison, the total DHA content in the SSO decreased from 10.16% (day 0) to 5.22% (day 4), indicating that during days 0-3 of oxidation, the sn-2 DHA of 2D-MAG decreased more rapidly than the total DHA of the SSO. One possible reason is that sn-2 monoglycerides undergo transesterification during oxidation, converting some sn-2 MAG to sn-1/sn-3 MAG. Laszlo et al. [29] reasoned that sn-2 MAG has a higher proportion of transesterification because of the position of its hydroxyl groups. In addition, the rate of acyl migration in MAG is affected by temperature, solvent, acid, and base, and the length, unsaturation, and distribution of the acyl groups also affect the balance and distribution of MAGs in lipids. When antioxidants were added, the total DHA in the SSO system and the DHA at the sn-2 position did not decrease significantly (p > 0.05): on the fourth day of oxidation, the SSO retained 5.22% total DHA and 3.28% sn-2 DHA, significantly higher than in the experimental group without antioxidant. Therefore, the antioxidant inhibits not only the oxidation of DHA in the SSO but also the transesterification of sn-2 DHA to sn-1/sn-3 DHA in the structured lipid.
"Chemistry",
"Agricultural and Food Sciences"
] |
Tree-aggregated predictive modeling of microbiome data
Modern high-throughput sequencing technologies provide low-cost microbiome survey data across all habitats of life at unprecedented scale. At the most granular level, the primary data consist of sparse counts of amplicon sequence variants or operational taxonomic units that are associated with taxonomic and phylogenetic group information. In this contribution, we leverage the hierarchical structure of amplicon data and propose a data-driven and scalable tree-guided aggregation framework to associate microbial subcompositions with response variables of interest. The excess number of zero or low count measurements at the read level forces traditional microbiome data analysis workflows to remove rare sequencing variants or group them by a fixed taxonomic rank, such as genus or phylum, or by phylogenetic similarity. By contrast, our framework, which we call trac (tree-aggregation of compositional data), learns data-adaptive taxon aggregation levels for predictive modeling, greatly reducing the need for user-defined aggregation in preprocessing while simultaneously integrating seamlessly into the compositional data analysis framework. We illustrate the versatility of our framework in the context of large-scale regression problems in human gut, soil, and marine microbial ecosystems. We posit that the inferred aggregation levels provide highly interpretable taxon groupings that can help microbiome researchers gain insights into the structure and functioning of the underlying ecosystem of interest.
Microbial communities populate all major environments on earth and significantly contribute to the total planetary biomass. Current estimates suggest that a typical human-associated microbiome consists of ∼10^13 bacteria [1] and that marine bacteria and protists contribute as much as 70% of the total marine biomass [2]. Recent advances in modern targeted amplicon and metagenomic sequencing technologies provide a cost-effective means to get a glimpse into the complexity of natural microbial communities, ranging from marine and soil to host-associated ecosystems [3][4][5]. However, relating these large-scale observational microbial sequencing surveys to the structure and functioning of microbial ecosystems and the environments they inhabit has remained a formidable scientific challenge.
Microbiome amplicon surveys typically comprise sparse read counts of marker gene sequences, such as 16S rRNA, 18S rRNA, or internal transcribed spacer (ITS) regions. At the most granular level, the data are summarized in count or relative abundance tables of operational taxonomic units (OTUs) at a prescribed sequence similarity level or of denoised amplicon sequence variants (ASVs) [6]. The special nature of the marker genes enables taxonomic classification [7-10] and phylogenetic tree estimation [11], thus allowing a natural hierarchical grouping of taxa. This grouping information plays an essential role in standard microbiome analysis workflows. For example, a typical amplicon data preprocessing step uses the grouping information for count aggregation, where OTU or ASV counts are pooled together at a higher taxonomic rank (e.g., the genus level) or according to phylogenetic similarity [12][13][14][15][16]. This approach reduces the dimensionality of the data set and avoids dealing with the excess number of zero or low count measurements at the OTU or ASV level. In addition, rare sequence variants with incomplete taxonomic annotation are often simply removed from the sample.
This common practice of aggregating to a fixed taxonomic or phylogenetic level and then removing rare variants comes with several statistical and epistemological drawbacks. A major limitation of the fixed-level approach to aggregation is that it forces a tradeoff between, on the one hand, using low-level taxa that are too rare to be informative (requiring throwing out many of them) and, on the other hand, aggregating to taxa that are at such a high level in the tree that one has lost much of the granularity in the original data. Aggregation to a fixed level attempts to impose an unrealistic "one-size-fits-all" mentality onto a complex, highly diverse system with dynamics that likely vary appreciably across the range of species represented. A fundamental premise of this work is that the decision of how to aggregate should not be made globally across an entire microbiome data set a priori but rather be integrated into the particular statistical analysis being performed. Many factors, both biological and technical, contribute to the question of how one should aggregate: biological factors include the characteristics of the ecosystem under study and the nature of the scientific question; technical aspects include the abundance of different taxa and the quality of the available sequencing data (including sequencing technology, sample sequencing depth, and sample size), all of which may affect the ability to distinguish nearby taxa. Another important factor when considering the practice of aggregating counts is that standard amplicon counts only carry relative (or "compositional") information about the microbial abundances and thus require dedicated statistical treatment. When working with relative abundance data, the authors in [17][18][19] posit that counts should be combined with geometric averages rather than arithmetic averages. The common practice of performing arithmetic aggregation of read counts to some fixed level before switching over to the geometric-average-based compositional data analysis workflow is unsatisfactory, since the "optimal" level for fixed aggregation is likely data-dependent, and the mixed use of different averaging operations complicates interpretation of the results.
To address these concerns, we propose a flexible, data-adaptive approach to tree-based aggregation that fully integrates aggregation into a statistical predictive model rather than relegating aggregation to preprocessing. Given a user-defined taxon base level (by default, the OTU/ASV level), our method trac (tree-aggregation of compositional data) learns dataset-specific taxon aggregation levels that are optimized for predictive regression modeling, thus making user-defined aggregation obsolete. Using OTUs/ASVs as the base level, Fig. 1A illustrates the typical aggregation-to-genus-level approach, whereas Fig. 1B shows the prediction-dependent trac approach. The trac method is designed to mesh seamlessly with the compositional data analysis framework by combining log-contrast regression [20] with tree-guided regularization, recently put forward in [21]. Thanks to the convexity of the underlying penalized estimation problem, trac can deliver interpretable aggregated solutions to large-scale microbiome regression problems in a fast and reproducible manner.
We demonstrate the versatility of our framework by analyzing seven representative regression problems on five datasets covering human gut, soil, and marine microbial ecosystems. Figure 1C summarizes the seven scenarios in terms of the size of the microbial datasets and the average number of taxa selected by trac at each taxonomic aggregation level in the respective regression tasks. For instance, for the prediction of sCD14 concentrations (an immune marker in HIV patients) from gut microbiome data, trac selects, on average (over ten random training/test experiments), more taxa at the family level than at any other taxonomic level, while it selects no taxa at the class or OTU level. By contrast, for the prediction of pH in the Central Park Soil data, class-level taxa are selected more on average than any other level. This highlights the considerable departure from typical fixed-level aggregation when prediction is the goal. Furthermore, the variability across the seven scenarios suggests that different amounts of aggregation may be warranted in different data sets. Our trac framework complements other statistical approaches that make use of the available taxonomic or phylogenetic structure in microbial data analysis. For example, [22] uses phylogenetic information in the popular UniFrac metric to measure distances between microbial compositions. The authors in [23][24][25][26] combine tree information with the idea of "balances" from compositional data analysis [18] to perform phylogenetically guided factorization of microbiome data. Others have included the tree structure in linear mixed models [27,28], used phylogenetic-tree-based regression for detecting evolutionary shifts in trait evolution [29], and integrated tree information into regression models for microbiome data [30,31].
Along with our novel statistical formulation, we offer an easy-to-use and highly scalable software framework for simultaneous taxon aggregation and regression, available in the R package trac at https://github.com/jacobbien/trac. The R package trac also includes a fast solver for standard sparse log-contrast regression [15] to facilitate comparative analyses, as well as comprehensive documentation and a workflow vignette. All data and scripts to fully reproduce the results in this manuscript are available on Zenodo at https://doi.org/10.5281/zenodo.4734527.
We next introduce trac's mathematical formulation and discuss the key statistical and computational components of the framework. We also give an overview of the microbial data set collection and the comparative benchmark scenarios. To give a succinct summary of the key aspects of trac modeling on microbiome data, we present and discuss three of the seven regression scenarios in detail; the other scenarios are available in the Supplementary Material. We conclude the study by highlighting key observations and by providing recommendations and viable extensions of the trac framework.
Materials and methods
Modeling strategy. Let $y \in \mathbb{R}^n$ be $n$ observations of a variable we wish to predict and let $X \in \mathbb{R}_+^{n \times p}$ be a matrix with $X_{ij}$ giving the number of (amplicon) reads assigned to taxon $j$ in sample $i$. The total number of reads $\sum_j X_{ij}$ in sample $i$ is a reflection of the sequencing process and therefore should not be interpreted as providing meaningful information about the biological sample itself. This observation has motivated the adoption of compositional data methods, which ensure that analyses depend only on relative abundances. Following the foundational work in [20], one appropriate model for regression with relative abundance data is the log-contrast model, where the outcome of interest is modeled as linear combinations of log-ratios (i.e., log-contrasts) of relative abundance features. For high-dimensional microbiome data, the authors in [15] propose solving an $\ell_1$-penalized regression estimator that includes a zero-sum constraint on the coefficients, the so-called sparse log-contrast model. Writing $\log(X)$ for the matrix with $ij$th entry $\log(X_{ij})$, their estimator is of the form

$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \; L\big(y - \log(X)\beta\big) + \lambda P(\beta) \quad \text{subject to} \quad \sum_{j=1}^p \beta_j = 0. \tag{1}$$

Here, $L(r) = (2n)^{-1}\|r\|_2^2$ is the squared error loss and $P(\beta) = \|\beta\|_1$ is the $\ell_1$ penalty [32]. The zero-sum constraint ensures that this model is equivalent to a log-contrast model [33] and invariant to sample-specific scaling. To understand the intuition behind the sparse log-contrast model, imagine that $\beta_j$ and $\beta_k$ are the only two nonzero coefficients. In such a case, the zero-sum constraint implies that predictions will be based on only the log-ratio of these two taxa. This can be seen by noting that $\beta_j = -\beta_k$, and so our model's prediction for observation $i$ would be

$$\beta_j \log(X_{ij}) + \beta_k \log(X_{ik}) = \beta_j \log\!\left(\frac{X_{ij}}{X_{ik}}\right).$$

Thus, using a log has the effect of turning differences into ratios. In addition, the zero-sum constraint provides invariance to sample-specific scaling: replacing $X$ by $DX$, where $D$ is an arbitrary diagonal matrix, leaves Eq. (1) unchanged, since

$$\sum_{j=1}^p \beta_j \log(d_i X_{ij}) = \log(d_i) \sum_{j=1}^p \beta_j + \sum_{j=1}^p \beta_j \log(X_{ij}) = \sum_{j=1}^p \beta_j \log(X_{ij}).$$

The choice of the $\ell_1$ penalty was motivated in [15] by the high dimensionality of microbiome data and the desire for parsimonious predictive models. However, such a penalty is not well suited to situations in which large numbers of features are highly rare [21], a well-known feature of amplicon data. A common remedy, also adopted in [15], is to aggregate taxa at the base level, e.g., OTUs or ASVs, to the genus level and then to screen out all but the most abundant genera. Figure 1A depicts this standard practice: taxonomic (or phylogenetic) information in the form of a tree $\mathcal{T}$ is used to aggregate data, usually in an arithmetic manner (i.e., by summing), to a fixed level of the tree.
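To make the scale-invariance property discussed above concrete, the following small NumPy check (illustrative only, not from the paper's codebase) verifies that predictions from zero-sum coefficients are unchanged when each sample's counts are multiplied by an arbitrary positive factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 4
X = rng.integers(1, 100, size=(n, p)).astype(float)  # toy count matrix

# Coefficients obeying the zero-sum constraint: sum(beta) == 0.
beta = np.array([0.7, -0.2, -0.5, 0.0])
assert abs(beta.sum()) < 1e-12

pred = np.log(X) @ beta

# Rescale every sample (row) by an arbitrary positive factor d_i.
d = rng.uniform(0.1, 10.0, size=n)
pred_scaled = np.log(d[:, None] * X) @ beta

# log(d_i * X_ij) = log(d_i) + log(X_ij); the log(d_i) term cancels
# against sum(beta) == 0, so the predictions are identical.
print(np.allclose(pred, pred_scaled))  # True
```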
Our goal is to make aggregation more flexible (as illustrated in Fig. 1B), to allow the prediction task to inform the decision of how to aggregate, and to do so in a manner that is consistent with the log-contrast framework introduced above. A key insight is that aggregating features can be equivalently expressed as setting elements of $\beta$ equal to each other. For example, suppose we partition the $p$ base level taxa into $K$ groups $G_1, \ldots, G_K$ and demand that $\beta$ be constant within each group. Doing so yields $K$ aggregated features. If all of the $\beta_j$ in group $G_k$ are equal to some common value $\gamma_k$, then

$$\sum_{j=1}^p \beta_j \log(X_{ij}) = \sum_{k=1}^K \gamma_k \sum_{j \in G_k} \log(X_{ij}) = \sum_{k=1}^K \gamma_k\, |G_k| \log\!\Big(\big(\prod_{j \in G_k} X_{ij}\big)^{1/|G_k|}\Big).$$

Thus, we are left with a linear model with $K$ aggregated features, each being proportional to the log of the geometric mean of the base level taxa counts.
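A quick numerical check of this identity (again illustrative only, with invented counts and groups) confirms that constant-within-group coefficients reduce to geometric-mean features:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(1, 50, size=6).astype(float)  # one sample, 6 base-level taxa

groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # partition into K = 2 groups
gamma = np.array([0.4, -0.4])                        # per-group common coefficient

# Left-hand side: beta constant within each group, applied to the log counts.
beta = np.empty(6)
for k, g in enumerate(groups):
    beta[g] = gamma[k]
lhs = beta @ np.log(x)

# Right-hand side: gamma_k * |G_k| * log(geometric mean of group k's counts).
rhs = 0.0
for k, g in enumerate(groups):
    geom = np.exp(np.mean(np.log(x[g])))  # geometric mean of the group's counts
    rhs += gamma[k] * len(g) * np.log(geom)

print(np.allclose(lhs, rhs))  # True
```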
Associating the elements of $\beta$ with the leaves of $\mathcal{T}$, the above insight tells us that if our estimate of $\beta$ is constant within subtrees of $\mathcal{T}$, then that corresponds to a regression model with tree-aggregated features. In particular, each subtree with constant $\beta$-values will correspond to a feature, which is the log of the geometric mean of the counts within that subtree. The trac estimator uses a convex, tree-based penalty $P_{\mathcal{T}}(\beta)$ in place of the penalty in Eq. (1), specially designed to promote this subtree-based structure in $\beta$. The mathematical form of $P_{\mathcal{T}}(\beta)$ is given in Supplementary Material B. There, we show that the trac estimator reduces to solving the optimization problem

$$\hat{\alpha} = \arg\min_{\alpha} \; L\big(y - \log(\mathrm{geom}(X; \mathcal{T}))\,\alpha\big) + \lambda \sum_{u} w_u |\alpha_u| \quad \text{subject to} \quad \sum_{u} |\mathcal{L}_u|\,\alpha_u = 0, \tag{2}$$

where $\mathrm{geom}(X; \mathcal{T}) \in \mathbb{R}^{n \times (|\mathcal{T}|-1)}$ is a matrix in which each column corresponds to a non-root node $u$ of $\mathcal{T}$ and consists of the geometric mean of all base level taxa counts within the subtree rooted at $u$, and $\mathcal{L}_u$ denotes the leaf set of that subtree. Comparing this form of the trac optimization problem to Eq. (1) reveals an alternate perspective: trac can be interpreted as a sparse log-contrast model in which the features correspond not to base level taxa but to the geometric means of non-root taxa in $\mathcal{T}$ (i.e., $X$ is replaced by $\mathrm{geom}(X; \mathcal{T})$). This also facilitates model interpretability since we can directly combine positive and negative predictors into pairs of log-ratio predictors. For example, if $\alpha_u > 0$ and $\alpha_v < 0$ are the only nonzero coefficients, then our predictions would be based on the log-contrast of the two corresponding subtrees' geometric means,

$$\hat{y}_i = \alpha_u \log\big(\mathrm{geom}(X; \mathcal{T})_{iu}\big) + \alpha_v \log\big(\mathrm{geom}(X; \mathcal{T})_{iv}\big).$$

The particular choice of penalty is a weighted $\ell_1$-norm. While the trac package allows the user to specify general choices of weights $w_u > 0$, a convenient and interpretable strategy is to set the weights to an inverse power of the number of leaves in the subtree rooted at $u$, $w_u = |\mathcal{L}_u|^{-a}$. The scalar parameter $a \in \mathbb{R}$ controls the overall aggregation strength, with $a = 1$ being the default setting in trac. If the user decreases $a$, trac favors aggregations at a lower level of the tree; for $a$ sufficiently negative, trac admits solutions equivalent to a sparse log-contrast model without aggregation, since only leaves (with $|\mathcal{L}_u| = 1$) remain unaffected by the weight scaling. The regularization parameter $\lambda$, on the other hand, is a positive number determining the overall tradeoff between prediction error on the training data and how much aggregation should occur. By varying $\lambda$, we can trace out an entire solution path $\alpha(\lambda)$, from highly sparse solutions (large $\lambda$) to more dense solutions involving many taxa (small $\lambda$). This "aggregation path" can itself be a useful exploratory tool in that it provides an ordering of the taxa as they enter the model.
Computation, model selection, and prediction.
Using trac in practice requires the efficient and accurate numerical solution of the convex optimization problem specified in Eq. (2) across the full aggregation path. We experimented with several numerical schemes and found the path algorithm of [34] particularly well-suited for this task. The trac R package internally uses the path algorithm implementation from the c-lasso Python package [35], efficiently solving even high-dimensional trac problems. The trac package also provides a fast implementation of sparse log-contrast regression [15] for model comparison. The R package reticulate [36] is instrumental in connecting trac with the underlying Python library. The R packages phyloseq [37], ggplot2 [38], ape [39], igraph [40], and ggtree [41] are used for operations on tree structures and visualization.
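Although the production solver uses the c-lasso path algorithm, a problem of the form in Eq. (2) is small enough to prototype directly for a single $\lambda$. The sketch below is a minimal illustration only, not the trac package's actual implementation; the toy tree, leaf counts, weights, and data are all invented. It solves the weighted, zero-sum-constrained lasso with CVXPY:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 8            # samples, non-root tree nodes
Z = np.log(rng.uniform(1, 100, size=(n, m)))   # stand-in for log(geom(X; T))
y = rng.normal(size=n)
n_leaves = np.array([1, 1, 1, 1, 2, 2, 4, 8])  # |L_u| for each node (toy tree)

a = 1.0                  # aggregation strength (trac default)
w = n_leaves ** (-a)     # penalty weights w_u = |L_u|^(-a)
lam = 0.1                # regularization parameter

alpha = cp.Variable(m)
loss = cp.sum_squares(y - Z @ alpha) / (2 * n)
penalty = lam * cp.sum(cp.multiply(w, cp.abs(alpha)))
constraints = [n_leaves @ alpha == 0]  # zero-sum constraint on the implied beta

cp.Problem(cp.Minimize(loss + penalty), constraints).solve()
print(np.round(alpha.value, 3))
```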
To find a suitable aggregation level along the solution path, we use cross-validation (CV) with mean squared error to select the regularization parameter $\lambda \in [\lambda_{\min}, \lambda_{\max}]$ for all the results presented in this paper. In particular, we perform 5-fold CV with the "one-standard-error rule" (1SE) [42], which identifies the largest $\lambda$ whose CV error is within one standard error of the minimum CV error. This heuristic purposely favors models that involve fewer taxa and are therefore easier to interpret. (We also use the 1SE rule to select $\lambda$ for the sparse log-contrast model.) The parameter $a$ is a user-defined control parameter and not subject to a model selection criterion. Having solved the trac optimization problem and chosen a value of the tuning parameter ($\hat{\lambda}_{\text{chosen}}$), we can predict the response value at a new sample. Given a new vector of abundances $x \in \mathbb{R}^p_+$, we predict the response to be

$$\hat{y} = \sum_u \hat{\alpha}_u(\hat{\lambda}_{\text{chosen}}) \log\big(\mathrm{geom}(x; \mathcal{T})_u\big).$$

Due to trac's sparsity penalty, in general only a small number of coefficients will be non-zero, and thus the predictions will depend on only a small number of taxa's geometric means.
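The 1SE selection step itself is a small computation. A schematic version (hypothetical CV output, NumPy only) is:

```python
import numpy as np

# Hypothetical 5-fold CV output: one row per fold, one column per lambda,
# with lambdas sorted from largest (most aggregation) to smallest.
lambdas = np.logspace(0, -3, 20)
base = (np.log10(lambdas) + 1.5) ** 2 + 1.0           # U-shaped true CV curve
rng = np.random.default_rng(3)
cv_errors = base + rng.normal(0, 0.05, size=(5, 20))  # 5 folds of noisy errors

mean_err = cv_errors.mean(axis=0)
se_err = cv_errors.std(axis=0, ddof=1) / np.sqrt(cv_errors.shape[0])

best = np.argmin(mean_err)                 # "CV best" lambda index
threshold = mean_err[best] + se_err[best]  # one standard error above the minimum
# Largest lambda within one SE of the minimum (first True, lambdas descending).
lam_1se = lambdas[np.argmax(mean_err <= threshold)]

print(f"CV-best lambda: {lambdas[best]:.4f}, 1SE lambda: {lam_1se:.4f}")
```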
Data collection.
We assembled a collection of five publicly available and previously analyzed datasets, spanning human gut, soil, and marine ecosystems (see also the Data column in Fig. 1C). All datasets except Tara consist of 16S rRNA amplicon data of Bacteria and Archaea in the form of OTU count tables, taxonomic classifications, and measured covariates, as provided in the original publications. For ease of interpretability, we leverage the taxonomic tree information rather than the phylogeny in our aggregation framework. To investigate potential human host-microbiome interactions, we re-analyze two human gut datasets: a cohort of HIV patients (Gut) [43], comprising p = 539 OTUs and n = 152 samples, and a subset of the American Gut Project data (Gut (AGP)) [5], provided in [44], comprising p = 1387 OTUs present in at least 10% of the n = 6266 samples. To study niche partitioning in terrestrial ecosystems, we use the Central Park soil dataset [45], as provided by [23], which consists of p = 3379 OTUs and n = 580 samples with a wide range of soil property measurements. For marine microbial ecosystems, we consider a sample collection from the Fram Strait in the North Atlantic [46], available at https://github.com/edfadeev/Bact-comm-PS85. The data set consists of n = 26 samples for p = 3320 OTUs in the particle-associated size class, and n = 25 samples for p = 4510 OTUs in the free-living size class. The second marine dataset is the Tara global surface ocean water sample collection [3], available at http://oceanmicrobiome.embl.de/companion.html, which comprises metagenome-derived OTUs (mOTUs). In Tara, each of the p = 8916 mOTUs considered here is present in at least 10% of the n = 136 samples. All data and analysis scripts are available in fully reproducible R workflows at https://github.com/jacobbien/trac-reproducible. Since trac can operate on any taxon base level, we provide all data sets both at the original (m)OTU base level and in arithmetically aggregated form at higher-order ranks, i.e., species, genus, family, order, class, and phylum. This facilitates straightforward method comparison across different base-level aggregations.
Method comparison and model quality assessment.
To provide a comprehensive model performance evaluation and to highlight the flexibility of the trac modeling framework, we consider the following benchmark scenarios. First, we consider three different regression models. We choose the sparse log-contrast regression model [15] as the standard baseline for performing regression on compositional data; it can be considered a limiting case of trac. In addition, we consider trac with two different aggregation parameters a: the setting a = 1 is referred to as standard trac, and the setting a = 1/2 is referred to as weighted trac and tends to favor aggregations closer to the leaf level. Second, to assess the influence of arithmetic aggregation to a fixed level, e.g., the genus level, we compare the performance of all regression models for three different input base levels: OTU, genus, and family.
To assess how well a log-contrast or trac model generalizes to "unseen" data, we randomly select 2/3 of the samples in each of the considered datasets for model training and selection. On the remaining 1/3 of the samples, we compute the out-of-sample test mean squared error as well as the Pearson correlation between model predictions and actual measurements. While the out-of-sample test error serves as a key quantity for assessing model generalizability, we also record overall model sparsity, measured in terms of the number of aggregations (or taxa, for sparse log-contrast models) in the trained model; sparsity serves as a measure of how "interpretable" a model is. Finally, we repeat all analyses on ten random training/test splits of the data to measure average test error and model sparsity. For ease of interpretation, we analyze the trained models derived from split 1 in greater detail throughout the next section and discuss the biological significance of the derived regression models.
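The evaluation protocol is straightforward to express in code. A schematic version is shown below (illustrative only, with placeholder data and a generic `fit`/`predict` model interface rather than the trac package's actual API):

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate_split(model, Z, y, rng):
    """One random 2/3-train / 1/3-test evaluation, as in the benchmark protocol."""
    n = len(y)
    idx = rng.permutation(n)
    train, test = idx[: 2 * n // 3], idx[2 * n // 3 :]

    model.fit(Z[train], y[train])          # model selection (CV) happens internally
    pred = model.predict(Z[test])

    mse = np.mean((pred - y[test]) ** 2)   # out-of-sample test error
    r, _ = pearsonr(pred, y[test])         # correlation of predictions vs. truth
    return mse, r

# Repeat over ten random splits and average, as in the paper:
# results = [evaluate_split(model, Z, y, np.random.default_rng(s)) for s in range(10)]
```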
Results and discussion
We next highlight key results of the trac framework for three of the seven regression scenarios described above on three different microbiome datasets. The first scenario considers the prediction of an immune marker (soluble CD14, sCD14) in HIV patients from microbiome data. In this scenario, we detail the behavior of a typical trac aggregation path and the model selection process. Furthermore, we compare the performance of trac models at different taxon base levels (OTU, genus, and family level) and aggregation weights (a ∈ {1/2, 1}) with standard sparse log-contrast models and analyze the resulting taxa aggregations. In the second scenario, we apply trac to predict pH in Central Park soil from microbial abundances and compare the resulting aggregations to known associations between pH and microbial taxa. The last scenario considers salinity prediction in the global ocean from Tara mOTU data. Further trac prediction scenarios are available in the Supplementary Material, including Body Mass Index (BMI) prediction on the American Gut Project data, soil moisture prediction in Central Park soil, and primary productivity prediction from marine microbes in two different size fractions in the North Atlantic Fram Strait.
Immune marker sCD14 prediction in HIV patients. Infection with HIV is often paired with additional acute or chronic inflammation events in the epithelial barrier, leading to disruption of intestinal function and the microbiome. The interplay between HIV infection and the gut microbiome has been posited to be a "two-way street" 47 : HIV-associated mucosal pathogenesis potentially leads to perturbation of the gut microbiome and, in turn, altered microbial compositions could result in ongoing disruption in intestinal homeostasis as well as secondary HIV-associated immune activation and inflammation.
Here, we investigate one aspect of this complex relationship by learning predictive models of immune markers from gut amplicon sequences. While 48 were among the first to provide evidence that gut microbial diversity is a predictor of HIV immune status (as measured by CD4+ cell counts), we consider soluble CD14 (sCD14) measurements in HIV patients as the variable to predict and learn an interpretable regression model from gut microbial amplicon data. sCD14 is a marker of microbial translocation and has been shown to be an independent predictor of mortality in HIV infection 49 .
Following 43, we analyze an HIV cohort of n = 152 patients for which sCD14 levels (in pg/ml) and fecal 16S rRNA amplicon data were measured. Using as base level all available p = 539 bacterial and archaeal OTUs, we first illustrate the typical trac prediction and model selection outputs with default weight parameter a = 1 on the first of the ten training/test splits in Fig. 2. In Fig. 2A, we visualize the solution of the α coefficients associated with each aggregation along the regularization path. The vertical lines indicate the models that were selected via cross-validation (CV) with the minimum mean squared error (CV best, dotted line) and the one-standard-error rule (1SE rule, dashed line) (see Fig. 2B). On the test data, we highlight the relationship between the test prediction performance of the trac models and the number of inferred aggregations (Fig. 2D). The selected model comprises five aggregations (Fig. 2E): the kingdom Bacteria, the phylum Actinobacteria, and the family Lachnospiraceae are negatively associated, and the family Ruminococcaceae and the genus Bacteroides are positively associated with sCD14 levels, thus resulting in a trac model with three log-contrasts. From a biological perspective, this trac analysis suggests a strong role of the Ruminococcaceae to Lachnospiraceae family ratio and, to a lesser extent, the Ruminococcaceae to Actinobacteria ratio in predicting mucosal disruption (as measured by sCD14). This follows from observing the large positive α coefficient associated with Ruminococcaceae and the large negative α coefficients associated with Lachnospiraceae and Actinobacteria (and recalling the interpretation of the trac output in terms of log-ratios). The protective or disruptive roles of Ruminococci or Lachnospiraceae in HIV patients are typically considered to be highly species-specific. Moreover, few consistent microbial patterns are known that generalize across studies 50. For instance, 51 report high variability and diverging patterns of the differential abundances of individual OTUs belonging to the Ruminococcaceae and Lachnospiraceae families in HIV-negative and HIV-positive participants. Our model posits that, on the family level, consistent effects of these two families are detectable in amplicon data. This also suggests that, with the right aggregation level, a re-analysis of recent HIV-related microbiome data may, indeed, reveal reproducible patterns of different taxon groups in HIV infection.

To quantify the effect of taxon base level and aggregation weight scaling a, we re-analyze the data at OTU, genus, and family base level and compare trac models to sparse log-contrast models at the respective base level. The latter approach thus reflects the default mode of analysis, proposed in 15, where sparse log-contrast modeling on fixed genus aggregations was performed. Figure 3 visualizes the estimated trac aggregations (a ∈ {1, 1/2}) and sparse taxa on the taxonomic tree of the sCD14 data. Figure 3A,B show the estimated models with OTUs as taxon base level, and Fig. 3C,D with family base level. Figure 3A highlights the previously discussed five aggregations from Fig. 2E (Bacteria, Ruminococcaceae, Lachnospiraceae, Actinobacteria, and Bacteroides), found with standard trac (a = 1), by coloring the respective branches of the corresponding full taxonomic tree.
We observe that the selected OTUs of the sparse log-contrast model (highlighted as black dots) cover each of the trac aggregations, including two OTUs in the phylum Actinobacteria, two OTUs in the family Ruminococcaceae, and one OTU in the family Lachnospiraceae (see Suppl. Table 7 for the selected coefficients). Figure 3B highlights how weighted trac with a = 1/2 results in predictive models that represent a compromise between standard trac and sparse log-contrast components. For instance, weighted trac still comprises the Ruminococcaceae family, the Actinobacteria phylum, and the Bacteroides genus but also shares four OTUs with the sparse log-contrast model. This exemplifies the flexibility of the trac framework in fine-tuning predictive models to the "right" level of aggregation. We observe a similar but less pronounced effect of the weighting when using aggregated family counts as taxon base level (Fig. 3C,D). The trac models comprise three and five aggregations, respectively, with the Actinobacteria phylum common to both. The sparse log-contrast model comprises six families, three of which are covered by the weighted trac model (two families in the Actinobacteria phylum and the Enterobacteriaceae family).

To compare the different statistical models in terms of interpretability and prediction quality, we report the sparsity level and the out-of-sample prediction errors, averaged over ten different training/test splits, in Table 1.

Table 1. Average out-of-sample test errors (rounded average model sparsity in parentheses) for trac (a ∈ {1, 1/2}) and sparse log-contrast models, respectively. Each row considers a different base level (OTU, genus, and family). Each number is averaged over ten different training/test splits of the sCD14 data.

We observe that for the sCD14 dataset, standard trac with OTU base level delivers the sparsest (on average, seven aggregations) and most predictive solution (average test error 6.3e+06), followed by standard trac on the family level (average test error 6.5e+06). The sparse log-contrast model with genus base level has considerably reduced prediction capability (average test error 7.1e+06). On this dataset, weighted trac (a = 1/2) models show the expected intermediate properties between sparse log-contrast and standard trac solutions.
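The family-ratio hypothesis above can be made concrete with a short sketch: aggregate OTU counts to the family level via the taxonomic labels and form the Ruminococcaceae/Lachnospiraceae log-ratio. This is a hand-rolled illustration of the idea, not trac itself; the variable names and the pseudocount are assumptions.

```python
import numpy as np
import pandas as pd

def aggregate_to_rank(otu_counts: pd.DataFrame, labels: pd.Series) -> pd.DataFrame:
    """Sum OTU columns that share a taxonomic label (labels indexed by OTU id)."""
    return otu_counts.T.groupby(labels).sum().T

def family_log_ratio(otu_counts, family_labels,
                     num="Ruminococcaceae", den="Lachnospiraceae",
                     pseudocount=1.0):
    """Per-sample log-ratio of two aggregated family counts."""
    fam = aggregate_to_rank(otu_counts, family_labels)
    return np.log((fam[num] + pseudocount) / (fam[den] + pseudocount))

# e.g., np.corrcoef(family_log_ratio(counts, families), scd14_levels)[0, 1]
```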
Predicting Central Park soil pH from microbiome data. We next perform trac prediction tasks on environmental rather than host-associated microbiome data. We first consider soil microbial compositions since they are known to vary considerably across spatial scales and are shaped by myriad biotic and abiotic factors. Using univariate regression models, the authors in 52 found that soil habitat properties, in particular pH and soil moisture deficit (SMD), can predict overall microbial "phylotype" diversity. For instance, using n = 88 soil samples from North and South America, the authors in 53 showed that soil pH is strongly associated with amplicon sequence compositions, as measured by pairwise UniFrac distances. Moreover, they found that soil pH correlated positively with the relative abundances of the Actinobacteria and Bacteroidetes phyla, negatively with Acidobacteria, and not at all with Beta/Gammaproteobacteria ratios.
Here, we use trac on the Central Park soil data collection comprising n = 580 samples and p = 3379 bacterial and archaeal OTUs 23,45 to provide a refined analysis of the relationship between soil microbiome and habitat properties. Rather than looking at the univariate correlative pattern between soil properties and phyla, we build multivariate models that take soil pH as the response variable of interest and optimize taxa aggregations using trac and sparse log-contrast models. The predictive analysis for soil moisture is relegated to the Supplementary Materials.
For pH prediction, standard trac gives an interpretable model with six aggregated taxonomic groups (see Fig. 4A): the two phyla Bacteroidetes and Verrucomicrobia and the class Acidobacteria-6 were positively associated, whereas the order Acidobacteriales, the class Gammaproteobacteria, and the overall kingdom of Bacteria (compared to Archaea) were negatively associated with pH (see bottom table in Fig. 4). We can thus associate a log-contrast model with three log-ratios of aggregated taxonomic groups with soil pH in Central Park. The Pearson correlation between model predictions and measured pH was 0.68 on the training data, and the model still maintained a high correlation of 0.65 on the test data. With the standard caveat that regression coefficients do not have the same interpretation (or even necessarily the same sign) as their univariate counterparts, our model also supports a positive relationship between the Bacteroidetes phylum and pH and gives refined insights into the role of the Acidobacteria phylum. The model posits that the class Acidobacteria-6 is positively related and the order Acidobacteriales (in the class Acidobacteriia) is negatively related to pH. The authors in 23 observed similar groupings in their phylofactorization of the Central Park soil data. There, the classes Acidobacteria-6 and Acidobacteriia belonged to different "binned phylogenetic units" whose relative abundances increased and decreased along the pH gradient, respectively. Finally, the phylum Verrucomicrobia and the class Gammaproteobacteria, both included in our model, have been reported to be highly affected by pH, with several species of Gammaproteobacteria particularly abundant in low-pH soil 54.
In contrast to the sCD14 data analysis, weighted trac ( a = 1/2 ) delivers a considerably more fine-grained model with 23 aggregations, including 13 on the OTU level. While the Acidobacteria-6 class is still selected as a whole, weighted trac picks specific OTUs and families in the Gammaproteobacteria class. Similar behavior is observed for the Acidobacteriales order and the Bacteroidetes phylum. Moreover, novel orders, families, genera, and OTUs from the Bacteria kingdom are selected. Four OTUs are shared with the sparse log-contrast model which selects 21 OTUs overall.
To compare the models in terms of interpretability and prediction quality, we report in Table 2 average out-of-sample prediction errors and sparsity levels at three different base levels using ten different training/ test splits. We observe that for the Central Park soil data set, standard trac with OTU base levels delivers the sparsest solutions (on average, ten aggregations), followed by weighted trac on the family level (on average, 15 aggregations). The sparse log-contrast models deliver the densest models (26-33, on average). All models are comparable in terms of out-of-sample test error (0.38-0.40).
Global predictive model of ocean salinity from Tara data. Integrative marine data collection efforts such as Tara Oceans 55 or the Simons CMAP (https://simonscmap.com) provide the means to investigate ocean ecosystems on a global scale. Using Tara's environmental and microbial survey of ocean surface water 3, we next illustrate how trac can be used to globally connect environmental covariates and marine microbiome data. As an example, we learn global predictive models of ocean salinity from n = 136 samples and p = 8916 miTAG OTUs 56. Even though salinity is thought to be an important environmental factor in marine microbial ecosystems, existing studies have investigated the connection between the microbiome and salinity gradients mainly on a local marine scale, in particular in estuaries. Standard trac (a = 1) identifies four taxonomic aggregations (see Fig. 5A), with the kingdom Bacteria and the phylum Bacteroidetes being negatively associated, the class Alphaproteobacteria strongly positively associated, and Gammaproteobacteria moderately positively associated with marine salinity.
Consistent with this trac model, a marked increase of Alphaproteobacteria with increasing salinity was observed in several estuary studies 57,58. In a global marine microbiome meta-analysis 59, Spearman rank correlations between relative abundances of microbial clades and several physicochemical water properties, including salinity, were reported, showing four out of five orders in the Bacteroidetes phylum to be negatively correlated with salinity. However, three out of four orders belonging to Gammaproteobacteria were negatively correlated with salinity, suggesting that the standard trac model does not universally agree with standard univariate assessments. However, as shown in Fig. 5B, weighted trac (a = 1/2) reveals a more fine-grained taxon aggregation, selecting the Halomonadaceae family and the Marinobacter genus in the phylum Gammaproteobacteria with negative α coefficients and a Gammaproteobacteria OTU (OTU 520, order E01-9C-26 marine group) with a positive α coefficient (see also Supplementary Table 23). Likewise, out of the nine OTUs selected by the sparse log-contrast model (black dots in Fig. 5A,B), four of the six selected Gammaproteobacteria OTUs have negative coefficients (including OTU 520), and two OTUs have positive coefficients. In terms of model performance, the standard trac model shows good global predictive capabilities with an out-of-sample test error of 1.99 (on training/test split 1). We observe, however, that high-salinity outliers located in the Red Sea (Coastal Biome) and the Mediterranean Sea (Westerlies Biome) and a low-salinity outlier (far eastern Pacific fresh pool south of Panama) are not well captured by the model (see Supplementary Figure 5 for a scatter plot of measured vs. predicted salinity). Weighted trac (a = 1/2) and the sparse log-contrast models outperform standard trac on the salinity prediction task with out-of-sample test errors (on split 1) of 1.94 and 1.52, respectively.
This boost in prediction quality is further confirmed by the average out-of-sample prediction errors across all ten training/test splits and three base levels (see Table 3). Sparse log-contrast models on the OTU and genus base levels perform best (average test errors 1.3 and 1.4, respectively), followed by weighted trac on the genus level (1.5). However, standard trac models are considerably sparser (six to seven aggregations) than the sparse log-contrast models (13-24 taxa). Weighted trac models represent a good trade-off between predictability and interpretability, selecting ten to fourteen taxa, on average.
Conclusions
Finding predictive and interpretable relationships between microbial amplicon sequencing data and ecological, environmental, or host-associated covariates of interest is a cornerstone of exploratory data analysis in microbial biogeography and ecology. To this end, we have introduced trac, a scalable tree-aggregation regression framework for compositional amplicon data. The framework leverages the hierarchical nature of microbial sequencing data to learn parsimonious log-ratios of microbial compositions along the taxonomic or phylogenetic tree that best predict continuous environmental or host-associated response variables. The trac method is applicable to any user-defined taxon base level as input, e.g., ASV/OTU, genus, or family level, and includes a scalar tuning parameter a that allows control of the overall aggregation granularity. As shown above, this allows seamless testing of a continuum of models on a dataset of interest, with prior approaches to sparse log-contrast modeling as special limit cases 15,43,60. The framework, available in the R package trac and in Python 35, shares similarities with ideas from tree-guided balance modeling of compositional data 18,23,24, albeit with a stronger focus on finding predictive relationships and an emphasis on fast computation thanks to the convexity of the formulation and the underlying efficient path algorithm.

Our comprehensive benchmarks and comparative analysis on host-associated and environmental microbiome data revealed several notable observations. Firstly, across almost all tested taxon base levels and methods, standard trac (a = 1) resulted in the most parsimonious models and revealed data-specific taxon aggregations spanning all taxonomic ranks. This facilitated straightforward model interpretability despite the high-dimensionality of the data. For instance, on the sCD14 data, the standard trac model with OTU base level asserted a particularly strong predictive role of the Ruminococcaceae/Lachnospiraceae family ratio for sCD14, thus generating a testable biological hypothesis. Likewise, trac analysis on environmental microbiomes in soil and marine habitats consistently provided parsimonious taxonomic aggregations for predicting covariates of interest. For instance, Alpha- and Gammaproteobacteria/Bacteroidetes ratios aligned well with sea surface water salinity on a global scale, reminiscent of the ubiquitous Firmicutes/Bacteroidetes ratio in the context of the gut microbiome and obesity 61,62.
Secondly, arithmetic aggregation of OTUs to a higher taxonomic base level prior to trac or sparse log-contrast modeling did not result in significant predictive performance gains. In fact, using OTUs as base level, at least one of the three statistical methods showed superior test error performance while maintaining a high level of sparsity. These results suggest that a user may safely choose the highest level of resolution of the data (e.g., mOTUs, OTUs, or ASVs) in (weighted) trac models without sacrificing prediction performance.
Thirdly, while standard trac models always showed good predictive performance on out-of-sample test data, our comparative analysis of average performance indicated that weighted trac and sparse log-contrast models can outperform the parsimonious trac models in terms of test error, particularly on environmental microbiome data. For instance, on the Central Park soil data, we observed moderate performance gains using weighted trac, and on marine data (see Extended Results in the Supplementary Materials for the Fram Strait dataset), sparse log-contrast models showed, on average, the best predictive performance. These results add a valuable piece of information to the ongoing debate about the usefulness of incorporating phylogenetic or taxonomic information into statistical modeling. For example, the authors in 63 convincingly argue that incorporating such information provides no gains in microbial differential abundance testing scenarios.
We posit that, in the context of statistical regression, full comparative trac analyses like the ones presented here, can immediately determine in a concrete and objective way whether phylogenetic or taxonomic information is useful for a particular prediction task on the data set of interest.
The trac framework naturally lends itself to several methodological extensions that are easy to implement and may prove valuable in microbiome research. Firstly, as is apparent in the gut microbiome context, the inclusion of additional factors such as diet and lifestyle would likely improve prediction performance. This can be addressed by combining trac with standard (sparse) linear regression to allow the incorporation of (non-compositional) covariates into the statistical model (see, e.g., 64). Secondly, while we focused on predictive regression modeling of continuous outcomes, it is straightforward to adapt our framework to classification tasks when binary outcomes, such as case vs. control group or healthy vs. sick participants, are to be predicted. For instance, using the (Huberized) square hinge loss (see, e.g., 65) as the objective function L(·) in Eq. (2) would provide an ideal means to handle binary responses while simultaneously enabling the use of efficient path algorithms (see 35 and references therein). Thirdly, due to the compositional nature of current amplicon data, we presented trac in the common framework of log-contrast modeling. However, alternative forms of tree aggregation over compositions are possible, for instance, by directly using the relative abundances as features rather than log-transformed quantities. Tree aggregations would then amount to grouped relative abundance differences rather than log-ratios, resulting in a different interpretation of the estimated model features.
In summary, we believe that our methodology and its implementation in the R package trac, together with the presented reproducible application workflows, provide a valuable blueprint for future data-adaptive aggregation and regression modeling for microbial biomarker discovery, biogeography, and ecology research. This, in turn, may contribute to the generation of new interpretable and testable hypotheses about host-microbiome interactions and the general factors that shape microbial ecosystems in their natural habitats.
"Biology",
"Computer Science"
] |
Smart Products Enable Smart Regulations—Optimal Durability Requirements Facilitated by the IoT
The challenges and opportunities linked with IoT have been intensively discussed in recent years. The connectivity of things over their entire life cycle and the smart properties associated with it provide new functionalities and unprecedented availability of (usage) data. This offers huge opportunities for manufacturers, service providers, users, and also policymakers. The latter may impact policy areas such as the regulations on resource and materials efficiency under the Ecodesign Directive 2009/125/EC. With the general approach as it is practiced today, legal requirements are usually set for entire product groups without considering the products individually, including user behavior and environmental conditions. The increasing number of smart products and the growing availability of product data are sparking a discussion on whether these requirements could be more product and application-specific. This paper presents a method for calculating the economically and ecologically optimal durability of a product. It allows determining the point in time when a product should be replaced by combining consumer data with product design data. This novel approach could contribute to making product regulation more flexible and possibly more efficient. In this context, fundamental challenges associated with smart products in policymaking are also discussed.
Introduction
Digitization is fundamentally changing both everyday life and the economy. The development of the Internet of Things (IoT) is accelerating, and the number of connected things was expected to rise from 8.4 billion in 2017 to more than 20 billion by 2020 [1]. The term IoT, first established by Kevin Ashton in 1999 [2], describes "an ecosystem in which applications and services are driven by data collected from devices that sense and interface with the physical world" [3,4]. Nowadays, also referred to as 'The Internet of Everything' [5], it describes a phenomenon where processes, people, data, and everyday objects are connected to the Internet to achieve specific goals.
Smart, connected things consist not only of conventional physical components but also of "smart" ones (e.g., sensors, controls, software) and connectivity components (e.g., ports and protocols). These smart, connected products have advanced capabilities and functions and generate an immense amount of new (and sensitive) data [6]. The availability, use, processing, and analysis of this big data offers unprecedented opportunities [3,[7][8][9][10][11][12]. The level of feedback that producers can now receive, in real time, provides opportunities for product innovations and customized services. Further, it has implications for overall lifecycle management, and specifically for the monitoring of energy efficiency and the replacement management of these smart products, with possible implications also for product regulation. By 2021, connected home appliances, such as connected white goods (refrigerators, washing machines, dishwashers, etc.), will account for almost half of all machine-to-machine connections, demonstrating the relevance of IoT in consumers' everyday lives [3]. The regulation of these white goods in Europe has been established under the Ecodesign Directive. In earlier work, status data of such connected products could be analyzed, and information on required spare parts could be provided. Status monitoring made it possible "to estimate its residual value to take timely decisions on when it has reached the end of its environmentally sustainable lifetime" [20]. However, such information on degradation and maintenance requirements is not only valuable for the producers and other economic stakeholders along the value chain.
Li et al. [23] discuss the role of IoT for Ecodesign in consumer electronics, pointing out potential enhancements, e.g., for saving resources. Askoxylakis [24] presents a framework for pairing the circular economy (CE) and the IoT, emphasizing the potential to augment resource productivity, e.g., through monitoring and data management along the life cycles of products. How the IoT can offer new solutions to product lifecycle management is presented by Menon et al. [25], focusing on the role of specific platforms with "the ability to access, manage and control product-related information across various phases of lifecycle" for augmented insights into the condition of the products and potential need for maintenance [25].
Considering the growing regulatory pressure towards more sustainability, Zhang et al. [26] discuss how IoT technologies could enhance lifecycle information management in the automotive industry, providing information at all life stages and thus improving the recycling and recovery rate [26]. The availability of IoT data may also be relevant for regulation itself. Meanwhile, the role of IoT for smart government is broadly discussed [27]. However, there is not yet a systematic and strategic use of IoT and effective exploitation of its potential [28]. Metallidou et al. [29] investigated how the IoT can impact the efficiency of smart cities, highlighting improved energy performance certificates for smart buildings demanded by recent European legislation.
Our paper goes further, proposing an IoT-enabled calculation method for the economically and ecologically optimal durability of a product that can be used for a more flexible and effective Ecodesign regulation. We show how IoT data could contribute to higher flexibility and specificity for individual appliances and technologies in the framework of the EU Ecodesign Directive, as addressed in more detail in the next subsection.
Towards a More Flexible Regulation for Energy-Related Products
The Ecodesign Directive 2009/125/EC has established a framework setting Ecodesign requirements for energy-related products [13]. Since "Energy-related products account for a large proportion of the consumption of natural resources and energy" within the European Union, the Directive strives to encourage improvements in the overall environmental impact of these products. Although the Directive primarily addresses energy efficiency, the integration of resource efficiency has recently been discussed [30,31] and addressed in several political initiatives [32][33][34][35]. In addition to the focus on energy efficiency and the use phase, requirements are set only for product groups and not for individual technologies and applications. As outlined in the previous subsection, smart products' connectivity allows more information to be obtained and analyzed about a product's usage profile and environment. In addition, online monitoring of product performance parameters leads to better diagnostics. The need for maintenance or repair actions, or the entire durability of every single product, can be predicted more accurately. The obtained data can be used to calculate the point of time when it is more appropriate to replace a product than to maintain or repair it, covering both perspectives: the economic (lower energy costs and costs for repair and maintenance) and the ecological (lower consumption of energy and raw materials). This is not only of interest to producers and consumers but can also have implications for policymaking processes, as the data can help answer fundamental questions, such as "What is the optimum durability of a product?"

For energy-related products regulated under the Ecodesign Directive [13], regulations have been developed that include resource efficiency requirements [36][37][38][39][40][41][42][43][44]. These regulations are intensively discussed by stakeholders during the policymaking process [45]. The requirements include, among others, durability, availability of spare parts, software updates, and product reparability. The Methodology for the Ecodesign of Energy-related Products (MEErP) considers various economic and ecological aspects [46]. As part of this methodology, life cycle assessments referring to the mass of the different materials used to manufacture an average product (the "base case") are calculated using the EcoReport tool [47]. This tool uses a reference list of conversion factors that must be multiplied by the mass of the materials on the bill of materials (BOM). The results include, among others, CO2 equivalents for each material or material fraction and, as a sum, for the entire product. This reference list is not updated regularly and is currently part of the revision of the entire methodology. In addition, the calculation of the ecological footprint of a product also considers the energy consumption during the use phase. In the past, however, this relied mostly on assumptions rather than on data collected in situ: theoretical usage profiles and test results under controlled laboratory conditions are often used, which only partially reflect the product's real applications and thus consumer behavior. Therefore, alternatives or useful additions are required, to which the study presented here may contribute.
Considering the further shift of the Ecodesign Directive towards the inclusion of resource efficiency, as well as the new opportunities presented by the IoT, this paper presents a calculation method that allows manufacturers to identify the point of time when it is more efficient (from an economic and ecological perspective) to replace a product rather than to repair it. In the future, it could be applied by policymakers to set adequate requirements for product durability, availability of spare parts, and software and firmware updates for each type of product within one product group and for each type of application. This would allow for more flexible regulation and take user behavior into account more thoroughly.
Materials and Methods
This section introduces a calculation method for determining the ecologically and economically optimal time to replace a product in use. The application of the calculation method results in a critical durability limit (the durability is expressed as calendar time, number of cycles, etc., related to the product-specific application). If this limit is exceeded, it would be beneficial to replace a product in use with a new product that is likely to be more efficient from an ecological (Section 2.1) or economic (Section 2.2) perspective. Finally, further influences due to aging/wear-out, maintenance, and repair (Section 2.4) are presented.
Input Data
The data required for the application of our calculation method are (i) the bill of materials (BOM) of the related product, (ii) conversion factors (CF) for converting the masses of the materials used into CO2-equivalents (guidance and data are freely or commercially available, e.g., in [46,48,49]), and (iii) the energy consumption of the related product during the use phase. It is necessary to define consistent units; therefore, either the energy used for production, use, etc., shall be converted into CO2-equivalents, or the CO2-equivalent of each material used shall be converted into an energy unit.
Environmental Impact of a New Product
The environmental impact includes the provision of the materials and energy used to manufacture the product. First, the mass of each specific material m mat,i must be multiplied by the related CF mat,i and the sum calculated. Second, the energy consumed during the manufacture of the product E process must also be considered. Both the impact of the materials and the production process itself represent the environmental impact of the product E prod:

E prod = Σ(i = 1…n) m mat,i · CF mat,i + E process (1)

where n = number of different materials of the product.
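As a minimal numerical sketch of Equation (1), the snippet below multiplies an illustrative BOM by conversion factors and adds the production energy. The material names and CF values are placeholders for illustration only, not values from the EcoReport reference list.

```python
# Hypothetical conversion factors in kg CO2-eq per kg of material (assumed
# values for illustration; real ones come from lists such as EcoReport).
CF = {"steel": 1.9, "copper": 3.8, "ABS": 3.1}

def environmental_impact(bom_kg: dict, e_process: float) -> float:
    """Eq. (1): E_prod = sum_i m_mat,i * CF_mat,i + E_process (kg CO2-eq)."""
    return sum(mass * CF[material] for material, mass in bom_kg.items()) + e_process

# Example: a small appliance with 12 kg steel, 1.5 kg copper, 4 kg ABS and
# 35 kg CO2-eq of process energy.
e_prod = environmental_impact({"steel": 12.0, "copper": 1.5, "ABS": 4.0}, 35.0)
```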
Environmental Impact of a New Replacing Product
The same procedure as described in the previous subsubsection is applied to an alternative, new product that is considered as a replacement for the product in use. This time, the BOM of the new product, the same reference list of CF mat,i, and the energy used to manufacture the new product E process,repl must be used. In addition, the environmental impact of the product still in use, E prod, must be added, since the product in use was also manufactured in the past. All three aspects, i.e., the impact of the materials needed for production, the energy consumption of the production process itself, and the impact of the product still in use, represent the environmental impact E prod,repl of the product under consideration for replacement:

E prod,repl = Σ(i = 1…l) m mat,i,repl · CF mat,i + E process,repl + E prod (2)

where l = number of different materials of the replacing product.
Energy Consumption of a Product in Use
To compare the energy consumption (typically expressed in kWh) of a product during operation E use with the energy consumption of a new replacing product, two time periods must be included in the calculation: first, the period from the first use of the product t0 to the point of time t1 (∆t1) when an alternative product is considered to replace the previously used product; second, the period from this point of time t1 to a future point of time t2 (∆t2). In both periods, the input power P use needs to be considered:

E use = P use · (t1 − t0) + P use · (t2 − t1) = P use · (t2 − t0) (3)
Energy Consumption of a New Replacing Product
For the energy consumption of the replacing product E repl, the input power P repl of this new product shall be considered from the point of time t1, at which the new product is to replace the product still in use, until a future point of time t2. For comparison with the product still in use, the input power P use of the product still in use over the period from t0 until t1 must also be considered, since that energy has already been consumed by the point of time t1:

E repl = P use · (t1 − t0) + P repl · (t2 − t1) (4)
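The following is a direct transcription of Equations (3) and (4) into code; this is our own sketch, assuming constant input powers and consistent units (e.g., kW and hours):

```python
def e_use(p_use, t0, t2):
    """Eq. (3): energy of the product in use over [t0, t2] at constant power."""
    return p_use * (t2 - t0)

def e_repl(p_use, p_repl, t0, t1, t2):
    """Eq. (4): energy already consumed up to t1 plus the replacement's share."""
    return p_use * (t1 - t0) + p_repl * (t2 - t1)
```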
Input Data and Parameters
The input data for the calculation method presented here are the price of the product, the currently valid average electricity price for the consumer, the costs for transport and storage of the product, and, if applicable, for the disposal of the replaced product.
Economic Impact of a Product in Use
The economic impact for the consumer C prod is the price paid at the point of sale. It includes all manufacturers' costs in terms of materials used, salaries, energy consumption in production, transport and storage, marketing, profit margin, sales, etc. Costs do not need to be considered separately, as all costs and profits are reflected in the price of the product.
Economic Impact of a New Product
For a new product to replace a product in use, the same aspects as described in Section 2.2.2 must be considered. In addition to the price paid for the product that is considered as a replacement C repl, the economic impact of the product still in use C prod must be added, since the product in use was also purchased by the same consumer in the past. Therefore, the economic impact C prod,repl of a product intended as a replacement for a product in use is the sum of the prices paid by the consumer for both products:

C prod,repl = C prod + C repl (5)
Running Costs for a Product in Use
The costs for the consumer during the use phase of the product C use must be calculated, as also described in Sections 2.1.4 and 2.1.5, considering two different time spans: first, the time span from the first use of the product still in use t0 to the point in time t1 when an alternative product is considered as a replacement; second, the time span from t1 until a future point of time t2. The result of applying Equation (3), which describes the energy consumption of a product in use over the previously mentioned time spans, can be directly multiplied by the currently valid average electricity price c pow (consumer price). This results in the costs that the consumer must pay for the operation of the product (additional impacts are discussed in Section 2.4 and are not considered here for simplification):

C use = E use · c pow (6)

Running Costs for a New Product

As described in Section 2.1.5, when calculating the costs C use,repl for operating a new product as a replacement for another product in use, the power consumption from the point of time t1 at which the replacement would take place must be taken into account. The result of applying Equation (4) can be directly multiplied by the average electricity price c pow:

C use,repl = E repl · c pow (7)
Total Ecological and Economic Impacts
Including all input data from Section 2.1, the total ecological impact of the product still in use (Equation (8)) and of the product considered as its replacement (Equation (9)) can be calculated:

E total = E prod + E use (8)

E total,repl = E prod,repl + E repl (9)

Including all input data from Section 2.2, the corresponding total economic impacts follow from Equations (10) and (11):

C total = C prod + C use (10)

C total,repl = C prod,repl + C use,repl (11)

At the point of time t crit,E, when E total = E total,repl, a product in use and a product considered to replace it have the same ecological impact. At any point of time later than t crit,E, a new product as an alternative to the product in use would lead to a lower ecological impact. Analogously, a point of time t crit,C can be determined for the economic impact. This is shown in Figure 1.
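Under the constant-power assumptions of Equations (3) and (4), setting Equation (8) equal to Equation (9) (and (10) equal to (11)) yields a closed form for the critical times. The sketch below is our illustration of this step, not part of the source methodology; e_new denotes the materials-plus-production impact of the replacing product alone, and all quantities must share consistent units.

```python
def t_crit_ecological(t1, e_new, p_use, p_repl):
    """Solve E_total(t) = E_total,repl(t) for t (Eqs. (8) and (9)).
    e_new: impact of manufacturing the replacing product, expressed in the
    same unit as p_use * time (e.g., kWh if CO2-eq were converted to kWh)."""
    if p_use <= p_repl:
        return float("inf")  # a replacement that is not more efficient never pays off
    return t1 + e_new / (p_use - p_repl)

def t_crit_economic(t1, price_repl, c_pow, p_use, p_repl):
    """Solve C_total(t) = C_total,repl(t) for t (Eqs. (10) and (11))."""
    if p_use <= p_repl:
        return float("inf")
    return t1 + price_repl / (c_pow * (p_use - p_repl))

# Example: replacing a 0.5 kW appliance with a 0.4 kW one costing 400 EUR at
# 0.30 EUR/kWh gives t_crit_economic(0, 400, 0.30, 0.5, 0.4) = 13,333 h of use.
```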
Additional Influences
The performance of the products may change over time, and the repair of the products needs to be considered. In this subsection, these influences, especially on the parameter t crit, are explained.
Energy Efficiency as a Function of Time
The required input power of a product is theoretically constant during its lifetime, so the cumulative energy consumed increases linearly as a function of time. Any change in the energy consumption required to keep the product functional, e.g., due to component wear or the loss and aging of lubricants, will gradually take effect and increase the environmental impact of the product. The environmental impacts from Section 2.1 do not include wear or fatigue of components, aging of lubricants, etc., in order to keep the presented approach independent of the type of energy-related product and its technology. However, if such additional influences are to be considered when applying the presented approach, the increasing demand for energy required for the intended function of the product can be captured by a correction factor in Equations (3) and (4). The influence of these kinds of effects over time must then be determined, described by a function, and this function added to the previously mentioned equations. In addition, the point of time t crit,E depends strongly on the difference between the energy efficiencies of both products and on the energy used to manufacture the products: the less energy consumed to manufacture the replacing product and the more efficient it is, the earlier t crit,E occurs.
Figure 1. Total ecological and economic impacts of the product in use (E total, C total) and of the product considered as its replacement (E total,repl, C total,repl). t crit,E and t crit,C represent the points of time from which the ecological or economic impact of a new product replacing a product in use would be lower than the impact of the product in use.
Repair (and Maintenance)
Both the product in use and the product that is considered as its replacement must be repaired after certain time spans. For the sake of simplicity, the same number of repair actions, the same relative points of time for repair, and the same types of spare parts are assumed.
The environmental impact of the spare parts, based on the sum of the materials used for their production multiplied by the corresponding CF (as described in the first two subsections of this section) plus the energy used for the manufacture, storage, and transport of the spare parts, must be considered. The contribution of the spare parts to the environmental impact of the product still in use, E rep, and of the product that is to replace it, E rep,repl, is equal. However, the spare parts are used at different points of time for the two products, so their contributions to the total environmental impact must be taken into account separately (Equations (12) and (13)), where n = number of different materials used for the original product before t1, r = total number of different materials used, m = number of spare parts of the original product before t1, and l = total number of spare parts.
The impact of repair also needs to be reflected in the calculation of the ecological impact, and Formulas (8) and (9) are extended accordingly:

E total,r = E prod + E use + E rep (14)

E total,repl,r = E prod,repl + E use,repl + E rep,repl (15)

Analogously, the contribution of repair to the total economic impact can be calculated:

C total,r = C prod + C use + C rep (16)

C total,repl,r = C prod,repl + C use,repl + C rep,repl (17)

The points of time at which maintenance and repair are required are often different. Maintenance is carried out at fixed time intervals and is repeated as long as the product is in use (e.g., a filter must be replaced after a certain number of operating hours). This can be done preventively or after the failure of a component that requires frequent maintenance (e.g., an ink cartridge must be refilled in a printer). Therefore, the contribution of product maintenance to the environmental impact and running costs of the entire product is linear on average and is neglected here. In contrast, repair actions are performed on demand at discrete points of time and are likely to be required more often the longer the product is in use. Consequently, the contribution of repair to the total impact of the entire product increases as a function of time. If repairs become less necessary the longer the product is in use (rather unlikely), the contribution of repair to the total impact would decrease over time. Only if repair actions are carried out at equal time intervals, and always the same component is replaced by the same spare part, would the contribution be linear; in the latter case, the manufacturer would possibly define the process as maintenance rather than repair. If the intervals between repair actions decrease during the product's lifetime, t crit occurs at an earlier point of time in the product's lifetime; accordingly, replacing a product in use with a new product would lead to a lower overall impact at an earlier stage. The contributions of the different time intervals of repair are summarized in Figure 2.
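When repairs add such discrete jumps, t crit generally has no closed form; the following sketch, again our own illustration, accumulates the impact difference on a time grid and returns the first crossing. The repair schedules, step size, and variable names are assumptions.

```python
import numpy as np

def t_crit_with_repairs(t1, t_max, p_use, p_repl, e_new,
                        repairs_use, repairs_repl, step=0.1):
    """First time t > t1 at which the replacement's cumulative impact falls
    below that of the product kept in use. repairs_* are lists of
    (event_time, impact) tuples; all impacts share one unit (kg CO2-eq)."""
    ts = np.arange(t1, t_max, step)
    # Impact difference "replace minus keep using" as a function of time:
    diff = e_new - (p_use - p_repl) * (ts - t1)
    for when, impact in repairs_repl:      # repairs of the replacing product
        diff = diff + np.where(ts >= when, impact, 0.0)
    for when, impact in repairs_use:       # repairs of the product in use
        diff = diff - np.where(ts >= when, impact, 0.0)
    below = np.flatnonzero(diff <= 0)
    return ts[below[0]] if below.size else None
```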
Results
In the introduction, the potential of IoT and big data was presented from a business perspective, but also for consumers and policymakers. However, the whole concept presented in this study strongly relies on the availability of data on the performance and usage of the devices that need to be retrieved, processed, and analyzed. To take advantage of the opportunities offered by the new paradigm and the calculation method presented here, several challenges [19,50] must be overcome, relating to technology, users, and legal issues regarding security, privacy, and accountability [51].
Technical Challenges
So far, the smart home market is quite fragmented. This situation makes the real integration and interoperability of heterogeneous systems and technologies [52] difficult. Standards are needed to enable their seamless integration and to truly extract value from the potential offered by IoT in terms of exploiting the generated and tracked data [19,53]. In addition, the current internet architecture and network capabilities are still too limited in terms of scalability and manageability [52]. There are also challenges in dealing with big data: adequate data management and mining for the vast amounts of complex, multi-source, heterogeneous, unstructured data are necessary, as are methods and capabilities for analysis to derive value from the opportunities [17,19,54]. Due to the fragmentation of technology and the issues of technology and system integration, we are still far from a truly connected smart home ecosystem; many users own only a single smart appliance, which limits the full value proposition.
Challenges Related to Consumers/Users
Despite growing familiarity with the technology, setting up and using smart home devices is still not easy, or desirable, for everyone, and studies show that there is still a discrepancy between the number of smart devices sold and the number of those actually used with their smart features. Since many of the appliances, such as dishwashers or washing machines, work well without a connection, users refrain from connecting them to the Internet [14]. While some might simply not appreciate the potential added value, others are reluctant for privacy reasons [16], as they have concerns about data security and integrity. As a result, such issues with vulnerabilities of IoT devices, cybercrime, and attacks need to be solved to support wider adoption [17]. Building trust in smart home appliances and trust in the privacy, safety, and security of user data is a technical, psychological, and legal challenge facing industry and policymakers.
Legal Challenges
The calculation of the total impact of a product shall include consideration of the energy consumption during the use phase of the product. Since the products are connected to the Internet (typically wirelessly) and the data are constantly monitored by third parties, the privacy of consumer data must be protected. In addition, consumers' trust must be enhanced, and consumers must subsequently be motivated to give their consent to share their data. Studies on consumer perception show that consumers are becoming increasingly aware that data are collected and commercialized and that they differentiate their personal data according to who uses the data and what the personal benefit is [55]. The systematic collection and processing of data are acceptable to the consumer at some point, e.g., in smart meters, when consumers can benefit from cost-saving potentials [56].

The General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) entered into force in the European Union in May 2018. Accordingly, all companies that handle the personal data of EU residents are subject to the GDPR. The GDPR defines the legal basis for the processing of data, which (among other options, Article 6) requires either that the owner of the data has given their consent or that the data are necessary for the performance of a task in the public interest or in the exercise of official authority. In our case, either the consumer must give consent (and must therefore be persuaded), or the right of the authorities to collect personal data must be anchored in an appropriate directive or regulation. A voluntary provision of data would be beneficial, firstly because of the questionable legitimacy of collecting and processing personal data in the interest of resource and energy efficiency and, secondly, to increase the general acceptance of the approach presented in this paper. Furthermore, regarding the question of ownership of personal data in the IoT, it needs to be clarified whether the data of connected goods fall under the category of personal or non-personal data [57], since the GDPR defines personal data in Article 4 as "information relating to an identified or identifiable person ('data subject') [...]". The example of the use of Smart City data [58] raises the question of whether the same logic can also be applied to, e.g., washing machines.

Connected appliances are also vulnerable to potential attacks by independent third parties that pose a constant security risk. Among the most popular threat tools are malware and botnets [59], which lead to threats such as denial of service or unauthorized access [2]. Companies would have to ensure that they provide sufficient software updates to ensure the safety and security of their products once they are released on the market [60][61][62][63]. So far, however, manufacturers have apparently and repeatedly failed to produce secure products [64], which, according to Moore [65], can be explained by misaligned incentives, information asymmetries, and externalities related to the economics of cybersecurity. To overcome the barriers faced by manufacturers, possibilities such as ex-ante safety regulation, ex-post liability, or disclosure of information have been proposed [65].
Certification based on standards as underlying requirements has been identified by European policymakers as one instrument to address this issue [66], with the EU Cybersecurity Act (Regulation (EU) 2019/881) "introducing for the first time an EU-wide cybersecurity certification framework for ICT products, services and processes" [67]. In view of the approach presented in this paper, companies offering connected products should, therefore, be encouraged to take appropriate steps towards enhancing cybersecurity. Certification can, therefore, help to build consumer trust based on the signaling theory [68] and encourage consumers to give their consent to sharing their data.
Discussion
IoT leads to innovative and individual product applications. In this context, the legal requirements affecting these products may also need to be reconsidered. So far, manufacturers' product design studies have had to rely on assumptions about the environmental conditions and use profiles of products in the hands of consumers. This information is used, among other things, as input data for testing and simulating a product's durability and future need for maintenance, repair, and additional services [69]. In parallel, there are requirements for placing a product on the market. These requirements are also developed based on average product models (representing whole product groups or sectors of product groups), standard environmental conditions, and use patterns. These simplifications of the real use conditions lead to relatively nonspecific product designs and legal requirements. With the availability of large amounts of real product data associated with the IoT, opportunities arise to improve and optimize both product design and legal requirements. Optimized product design based on data generated by increased connectivity is widely discussed in the literature [7][8][9][10][11][12]. What is missing, however, is a discussion of optimized regulatory requirements that take advantage of the new data available through the IoT.
The calculation method presented in this paper allows product requirements to be defined for each product individually, considering the environmental conditions under which the product performs its function. It represents a fundamental shift from the current approach, where product requirements have fixed values, e.g., for the availability of spare parts or software and firmware updates and for durability. Fixed values do not accurately represent the variety of products, usage profiles, and environmental conditions. Setting requirements relative to a point of time (when a product in use becomes less efficient than a possible new product), as an alternative to an absolute point of time (as practiced in policymaking today), would be much more effective. In addition, the efficiency of an entire population of a product can be determined online, provided that a representative number of consumers are willing to share their data. t crit is the factor that could be used as a reference point for the policymaking process and is, therefore, the scope of the paper presented here. In particular, the following requirements could be improved or set, respectively:
1. By using the method presented in this paper, the manufacturer would be able to calculate whether replacing a product in use with a new product sold on the market at a comparable price would lead to ecological and economic advantages. In a connected world, the manufacturer may then be in a legal position to no longer support the product, e.g., to no longer supply software updates or spare parts, which would lead to a phase-out of the product from the market. The conditions under which the availability of spare parts, software updates, etc., is mandatory can be set by policymakers, e.g., by defining that a product must be supported by the manufacturer until at least XY% of the products in use are less efficient than new products of the same brand and price category. This would make the regulation of the reparability of energy-related products more flexible and efficient. It is, of course, up to the user, as the owner of the energy-related product, to decide whether to continue using the device after this point has been reached;
2. The calculation described above is only possible under the assumption that the new product lasts until t_crit is reached, i.e., that the durability of the new product exceeds the interval between t_1 and t_crit. If a manufacturer declares that a product replacement would be more ecological and/or economical and that the product therefore no longer needs to be supported, the manufacturer must state the expected durability of the new product (at least in the declaration of conformity). The policymaker may then not regulate durability itself but may set an obligation to repair the product(s) during the stated durability. This would also make the regulation of the durability of energy-related products more flexible.
In summary, legal requirements can be set that must be fulfilled until a defined share of products has reached a certain state, instead of regulating fixed time spans. This would lead to a very flexible product regulation that can address any application and even any environmental condition; a minimal sketch of such a threshold rule follows below.
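To make the threshold rule concrete, the following minimal Python sketch checks the t_crit condition. It is our own illustration, not an algorithm from this paper: the efficiency metric, all names, and the 0.80 threshold (standing in for the "XY%" above) are assumptions.

```python
# Hypothetical sketch of the threshold rule discussed above; the efficiency
# metric, names, and the 0.80 threshold stand in for the paper's "XY%".

def support_may_end(efficiencies_in_use, efficiency_new, share_crit=0.80):
    """True once at least share_crit of the monitored units in use are less
    efficient than a comparable new product (the t_crit condition)."""
    worse = sum(1 for e in efficiencies_in_use if e < efficiency_new)
    return worse / len(efficiencies_in_use) >= share_crit

# Example: 100 connected appliances reporting an efficiency score
fleet = [0.70 + 0.002 * i for i in range(100)]      # illustrative scores
print(support_may_end(fleet, efficiency_new=0.88))  # True: 90% of units are worse
```

With continuously reported data, such a check could be re-evaluated whenever new efficiency readings arrive, so that t_crit is detected as it occurs rather than estimated in advance.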
Conclusions
From a political point of view, a "planned obsolescence" as a requirement of the Ecodesign Directive could be an effective, flexible, and dynamic approach to setting requirements for the availability of product information, software updates, spare parts, etc., as well as for setting minimum durability requirements. This would effectively improve the circularity of energy-related products, as mentioned and supported, e.g., in the European Commission's circular economy action plan [35]. In addition, the data from smart products will enable easier prediction of the need for spare parts and for software and firmware updates. Stocks of spare parts can be adapted to actual demand, ensuring availability according to legal requirements. Technical feasibility in terms of connectivity and data gathering is not the only prerequisite for the calculation method presented in this paper. It also requires the consumer's acceptance and consent, so that the product can be digitally monitored and information on his or her behavior (concerning the use of the products) can be transmitted to the manufacturer. Data privacy and security issues must be taken seriously. Time will tell whether the issues discussed in this paper represent the future of policymaking in a smart, connected world. In any case, smart products operating within the scope of the IoT should also be regulated smartly.
"Computer Science",
"Engineering",
"Environmental Science",
"Law"
] |
Examining factors for the adoption of silvopastoral agroforestry in the Colombian Amazon
Current land use systems in the Amazon largely consist of extensive, conventional, productivist livestock operations that drive deforestation. Silvopastoral systems (SPS) support a transition to low carbon production if they intensify in sympathy with the needs of biophysical and socio-economic contexts. SPS have been promoted for decades as an alternative livestock production system, but widespread uptake has yet to be seen. We provide a schema of associating factors for adoption of SPS based on past literature in tropical agriculture and apply this to a bespoke survey of 172 farms in the Caquetá region of the Colombian Amazon. We find a number of factors which do not apply to this region and argue for a context-specific approach. The management of increased market access and opportunities for SPS producers is crucial to avoiding additional deforestation. Further understanding of the underlying antecedents of common factors, such as perceptions of silvopastoral systems, would reduce the risk of perverse policy outcomes.
The paper is structured as follows. A conceptual background is presented which summarises the significant number of factors explored by past studies on SPS adoption. We then set multiple hypotheses based on these studies and test them through a bespoke survey of farmers within this region, with the aim of comprehensively exploring each barrier. Results are presented with the aim of testing the key drivers of adoption. This is followed by a discussion of the results and implications for interventions to support the transition to SPS. There is a substantial and growing literature on barriers to SPS adoption in tropical agriculture. Figure 1 summarises these studies across various contexts; they identify a range of biophysical, economic, socio-cultural and perceptual factors. Figure 1 and the accompanying literature provide the basis for a series of individual hypotheses. These are listed in Table 1 and specify separate components of the adoption problem around SPS. Steep slopes (H1) and unfavourable soils (H2) were reported to positively influence adoption 11,24. Herd size (H3) and farm size (H4) are wealth proxies that indicate the ability of farmers to overcome the establishment costs of SPS 25,26. Capacity is also addressed through labour availability (H5; H6), and such farms are better able to overcome the high labour demands of SPS 27. Gender (H7) has been found to be important and, due to societal gender inequalities, women may face extra barriers to adoption compared to men 22,23. Length of farming experience tends to lead to more SPS adoption (H8; H9), and age has been found to be a significant factor in adoption (H10) 22,27. Land tenure security is an important precursor to any type of long-term investment in the land, which affects adoption decisions (H11), since there would be little incentive to invest in SPS implementation without tenure 23,28,30. Market access is a well-documented determinant of adoption and has been proxied by several different variables, including the distance to a municipal centre (H12) or to a main road (H13). The opportunity costs of implementing SPS (H14) are higher when household members have off-farm incomes 6,22,28,29.
Knowledge sharing influences adoption, especially through neighbouring farmers 23, and this effect is, presumably, amplified when the neighbours are SPS-adopters (H21). Adoption is positively influenced by training in SPS provided by organizations that promote agroforestry, since such training closes the knowledge gaps that impede adoption (H22) 2,11,26,30. Farmers who have completed secondary school are more likely to understand the underlying concepts of SPS and are therefore more likely to adopt (H23) 16,22. Membership in farmers' associations influences community knowledge sharing and has been found to have a positive influence on adoption (H24) 11,23,24. Hypotheses H21-H24 are complemented by proxies for farmers' perceived understanding of and confidence in SPS: confidence in SPS (H25), having the skills to implement SPS (H26), and the ability to explain SPS (H27) 6,22,28,31.
Results and discussion
The results are shown in Table 2. We find no significant association between SPS and slope (H1) or soil order (H2) for these farmers in Caquetá. This is contrary to previous studies 11,13,24. Slope and soil are expected to be context dependent, and this may be the case here. Moreover, we examined soil order, rather than quality or sand proportion as used in previous studies; hence this may provide another dimension to understanding how soil influences SPS adoption. Although gender is only weakly significant (H7), we find that a higher proportion of female heads of household adopt SPS. Nevertheless, the majority of adopters are male. Previous studies have argued that, due to gender inequalities, women have less access to credit, income, and equipment, and that this acts as a barrier to adoption of SPS 22,23,32. The roles that women hold in Latin American cattle ranching operations are often discounted 33, and their contributions to various aspects of cattle management, such as milking, albeit significant, are therefore often overlooked.
Some studies found length of experience to positively affect adoption of SPS, albeit with low or no statistical significance 22,27, whereas we find the converse (H8, H9). The variables related to experience (years on current farm and livestock experience) showed significant and negative associations with adoption, meaning that increased experience of agricultural activity decreases the likelihood of adoption. This aligns with literature on the role of farming experience, which locks farmers into productivist practices as opposed to investment in alternative systems such as SPS 34,35. According to systems thinking 36, paradigms, such as embeddedness in a productivist mindset, are the intervention points in a system that are the most resistant to change but yield the greatest results when addressed. Therefore, if the negative association between experience and SPS adoption in Caquetá is a result of productivist paradigms, addressing these paradigms could generate substantial increases in adoption rates.

Table 1. Description of hypotheses. * In Colombia, "veredas" are the smallest type of subnational boundary and are spatially equivalent to a sub-municipality or neighbourhood.

Biophysical factors
H1. Farms in regions with a steep slope exhibit higher adoption rates than those in shallow sloping regions
H2. Farms located in veredas* with better soil quality exhibit higher adoption rates than farms in regions with less sandy soils

Production and social factors
H3. SPS are more likely to be adopted by farms with larger herds
H4. SPS are more likely to be adopted by larger farms
H5. SPS are more likely to be adopted by farms with more available household labour
H6. SPS are more likely to be adopted by farms with more available hired labour
H7. SPS are more likely to be adopted by male farmers
H8. Farmers with more years at their current farm will adopt SPS
H9. Farmers with more livestock experience will adopt SPS
H10. SPS are more likely to be adopted by older farmers

Economic factors
H11. SPS are more likely to be adopted by owner-occupiers
H12. SPS are more likely to be adopted by farms with better market access, where market access is proxied by proximity to a municipal centre
H13. SPS are more likely to be adopted by farms with better market access, where market access is proxied by proximity to a main road
H14. SPS are less likely to be adopted by farmers with off-farm jobs

Farmer perceptions
H15. SPS are more likely to be adopted by farmers who perceive the benefits to profitability
H16. SPS are more likely to be adopted by farmers who perceive the benefits to pest reduction
H17. SPS are more likely to be adopted by farmers who perceive the benefits to product quality
H18. SPS are more likely to be adopted by farmers who perceive the benefits to cost reduction
H19. SPS are more likely to be adopted by farmers who perceive the benefits to milk production
H20. SPS are more likely to be adopted by farmers who perceive the benefits to cattle reproduction

Factors related to information and education
H21. SPS are more likely to be adopted by farmers when other SPS-adopters are in their vicinity
H22. SPS are more likely to be adopted by farmers who have been trained in SPS
H23. SPS are more likely to be adopted by farmers who have completed secondary school
H24. SPS are more likely to be adopted by farmers who are members of a farmer association
H25. SPS are more likely to be adopted by farmers who are confident in SPS*
H26. SPS are more likely to be adopted by farmers who have the skills needed for SPS
H27. SPS are more likely to be adopted by farmers who have the ability to explain SPS

Access to markets has been found to be a driver of agricultural intensification 19 but also of SPS adoption 11,37,38, and proximity to main roads acts as a proxy for this market access (H13). Here we find this is not significant, and we find proximity to the municipal centre, another indicator of market access, to be only weakly associated with SPS (H12). Farmers adopting SPS are more distant from the municipal centre than non-SPS adopters. Road development is a well-known driver of deforestation both in the Amazon and internationally; therefore, this route to market access more likely leads to adverse impacts on forest ecosystems 17,39. Perhaps a more important limiter of market access is the lack of local markets resulting from the low population density observed in forest frontier areas within Caquetá, which in turn results from the low labour demands of extensive traditional ranching 38,40.
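The distance proxies behind H12 and H13 can be computed directly from coordinates. The following Python sketch is purely illustrative (it is not the authors' code, and the example coordinates are approximate): it implements the standard haversine great-circle distance that could serve as such a proxy.

```python
# Illustrative helper for a market-access proxy: great-circle distance from a
# vereda centroid to a municipal centre. Example coordinates are approximate.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Example: an invented vereda centroid vs. Florencia, Caquetá (approx. location)
print(f"{haversine_km(1.45, -75.40, 1.61, -75.61):.1f} km")
```

Straight-line distance is, of course, only a lower bound on travel distance; road-network distance or travel time would be a more faithful proxy where such data exist.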
SPS have been found to support sustainable and profitable livelihoods in the Colombian Amazon 41; therefore, if the other barriers to SPS are dismantled to the point where adoption becomes widespread, the concentration of people seeking SPS-based livelihoods would contribute to increasing population density and the subsequent revitalisation of local markets. Non-state actors have an advantage over centralized governmental programs in addressing issues at a highly local scale, for example by helping farmers to overcome regulatory market barriers 37.
All six variables reflecting farmer perceptions (H15-H20) exhibited highly significant and positive associations with SPS, highlighting the importance of exploring perceptions of the benefits of SPS. These include perceptions of economic factors, such as yield and profits, but also of pest management and cattle reproduction 37,42. Changing perceptions would be a key route to adoption, and several mechanisms, such as information exchange and education, have been proposed to raise awareness of SPS in these farming populations 11,24,43.

Table 2. Summary of results, strength of effects and p-values for each hypothesis. Sig. * < 0.05, ** < 0.01, *** < 0.001. * Relates to the statement 'I am confident that I could use different silvopastoral practices in my farm if I wanted to'. ^ Relates to the statement 'I have the skills, experience, and knowledge required to use silvopastoral practices in my farm'. ~ Relates to the statement 'I could clearly explain to other farmers the impact that the use of silvopastoral systems has on the farm'.

Another significant positive effect on adoption was farmers' proximity to other adopters (H21). Where there are existing silvopastoral farms in a vereda, farmers are more likely to adopt SPS. This is a result of community knowledge sharing, a commonly reported determinant of adoption across Latin America 11,24. A similar proximity effect was found in Argentina 23. This suggests a spatial effect in which intra-vereda knowledge exchange occurs.
Specialised SPS training is positively associated with adoption (H22). The training of farmers in SPS is a strategy commonly suggested in the literature for raising adoption of SPS 2,24,26,31,44. Like the perception of benefits, understanding of and confidence in SPS (H25-H27) exhibited weakly significant positive effects on adoption. Adoption was higher among farmers who had confidence in their ability to implement SPS, had the skills needed for SPS, felt that SPS were understandable, or were able to explain SPS to other farmers. The absence of knowledge gaps, in other words the understanding of SPS, is a strong and significant determinant of adoption 27,30,31,37. Both governmental and non-state actors, it has been argued, can contribute to closing these knowledge gaps via marketing, workshops on SPS, and specialised extension services 37.
Discussion
Colombia in the post-agreement landscape has experienced a range of demands on its future land use with strong climate commitments that support zero deforestation 45 . Silvopastoral systems support a transition to low carbon production but only if they intensify in sympathy with the needs of biophysical and socio-economic contexts 46,47 . Managing this transition requires locally targeted solutions and, in providing an overview of these key constraints to uptake, we find that adoption of SPS is context specific. A number of common factors associated with supporting uptake of this practice were not found to be applicable in the Caquetá region of the Colombian Amazon.
A key factor of concern is the role of increasing market access, which has been found in previous studies to be a driver both of deforestation and of SPS. In our context we find that greater distance to market leads to more SPS adoption, and we argue for the establishment of local markets to support this practice. However, if the complex relationships between economic growth, the intensity of activity, and adoption of SPS are not actively managed, this could lead to a false pathway for sustainable development or, worse, a potential increase in deforestation 30,47,48.
A positive determinant of adoption is the perception of benefits and farmers' level of understanding of the SPS system. There will be underlying causes of these perceptions, which potentially lie in historic interventions and past engagement with individuals and agencies 49. Whilst we offer a schema for understanding adoption, we consider these factors in isolation to explore their association with adoption, not their causality. It is notable that studies on this topic tend to ignore the underlying causal dynamics of these factors, and there is a paucity of studies examining the antecedents of these factors and their instrumentality in forming these perceptions. This is mostly a result of cross-sectional exercises in data collection, and the true dynamics of these systems need to be explored further to avoid perverse outcomes from policy prescriptions.
Methods
The Department of Caquetá covers an area of around 89,000 km² (roughly 8% of Colombia's total area) and hosts a variety of cropping and livestock activities. It is the third largest department in Colombia but has low levels of population density. As it sits within the Amazon basin, Caquetá has highly important and rich ecological diversity and a high density of forest cover (Fig. 2). Given its remoteness and position, it was heavily affected by the armed conflict and has been a focus for investment and infrastructure support in the post-agreement landscape 50. Critical land use pressures arise from the illegal cultivation of coca in the region but also from mineral and fossil-fuel extraction. Moreover, nearly 60% of rural land in Caquetá is held under legally informal or imperfect tenure, which limits access to institutional support regimes 51.
Caquetá is Colombia's fifth largest milk producer and is characterised by smallholder extensive cattle farming. Due to its medium altitude, farmers tend to adopt mixed dairy and beef systems with tropically adapted breeds crossed with dairy breeds, and yields are low relative to more intensive regions 52. Agriculture provides the main source of income for local livelihoods and is largely a source of exports for the Colombian economy 53.
The sampling universe was compiled by working with the local department of Agriculture in Caquetá, with local producers' associations, and with companies that purchase milk from these producers. This led to an overall sample of 1100 registered farmers in the region, 112 of whom had previously been identified as having implemented silvopastoral systems on farm. Detailed information was received from companies such as Nestlé and Alimentos GAMAR, from the Ministry of Agriculture's Milk Price Monitoring Unit, and from departmental agricultural leaders. The farm database was created to mobilise a telephone-based survey. This was favoured due to issues around remoteness and accessibility, and the need to collate a large enough sample to conduct robust statistical tests. Whilst this imposes some bias, e.g., towards larger operations, mobile phone usage is fairly common in the Caquetá region, with farmers using mobile phones as part of their business operations 54. Farmers were told their participation was voluntary and that information that might identify them would only be held on a secure server and not shared with third parties. Structured phone interviews were conducted with farmers across the study area with the aim of collecting an equal sample of adopters and non-adopters across the region. As a result, 172 farms were selected such that 86 (50%) had adopted silvopastoral systems on at least one hectare of land, and the other half had not.
Once completed, these data were matched through GPS coordinates, located at the centre point of each corresponding vereda, to geospatial variables that had been aggregated to the vereda level using the mean. The spatial variables, which were derived from online sources, were soil type and slope. Soil data were obtained from the website of the Instituto Geográfico Agustín Codazzi (IGAC) 55. Slope data were derived from a global digital elevation model 56.
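As an illustration of this matching-and-testing step, the following Python sketch aggregates a covariate to vereda level, joins it to the survey records, and compares adopters with non-adopters. It is not the authors' code: file names, column names, and the choice of a Mann-Whitney test are assumptions standing in for whatever the study actually used.

```python
# Illustrative sketch: vereda-level aggregation, join to survey data, and a
# simple two-group association test. Names and test choice are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

farms = pd.read_csv("survey_responses.csv")   # one row per farm: vereda_id, sps_adopter, ...
slope = pd.read_csv("slope_samples.csv")      # slope values sampled within each vereda

# Vereda-level mean, mirroring the aggregation described above
slope_means = (slope.groupby("vereda_id", as_index=False)["slope_deg"]
                    .mean()
                    .rename(columns={"slope_deg": "mean_slope"}))
farms = farms.merge(slope_means, on="vereda_id", how="left")

# Compare the covariate between adopters and non-adopters (an H1-style check)
adopters = farms.loc[farms["sps_adopter"] == 1, "mean_slope"].dropna()
others = farms.loc[farms["sps_adopter"] == 0, "mean_slope"].dropna()
stat, p = mannwhitneyu(adopters, others)
print(f"U = {stat:.1f}, p = {p:.3f}")
```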
"Economics"
] |
Computational Binding Study Hints at Ecdysone 20-Mono-Oxygenase as the Hitherto Unknown Target for Ring C-Seco Limonoid-Type Insecticides
The insecticidal property of ring C-seco limonoids has been discovered empirically and the target protein identified, but, to date, the molecular mechanism of action has not been described at the atomic scale. We elucidate on computational grounds whether nine C-seco limonoids present sufficiently high affinity to bind specifically to the putative target enzyme of the insects (ecdysone 20-monooxygenase). To this end, 3D models of the ligands and the receptor target were generated and their interaction energies estimated by docking simulations. As a proof of concept, the tetrahydro-isoquinolinyl propenamide derivative QHC is the reference ligand bound to aldosterone synthase in the complex with PDB entry 4ZGX. It served as the 3D template for target modeling via homology. QHC was successfully docked back to its crystal pose with an affinity in the one-digit nanomolar range. The reported experimental binding affinities span the nanomolar to lower micromolar range. All nine limonoids were found to bind strongly, in the range of −13 < ΔG < −9 kcal/mol. The molt hormone ecdysone showed a comparable ΔG energy of −12 kcal/mol, whereas −11 kcal/mol was the back docking result for the liganded crystal 4ZGX. In conclusion, the nine C-seco limonoids are strong binders on theoretical grounds, acting in a range between a ten-fold lower and a ten-fold higher concentration level than the insect hormone ecdysone at its known target receptor. The comparable or even stronger binding hints at ecdysone 20-monooxygenase as their target biomolecule. Our assumption, however, is in need of future experimental confirmation before conclusions can be drawn with certainty about the true molecular mechanism of action of the C-seco limonoids under scrutiny.
Introduction
With the human population constantly growing, a need has arisen for the industrialization of agriculture and food production. Neither can undergo endless optimization without the use of herbicides or pesticides. The cultivation of food plants has to be conducted in new ways, so new forms of field care, such as insecticides, can be developed. One of these alternatives is the so-called "biopesticides", natural compounds obtained from living organisms that contribute to the elimination of pests. Their advantage lies in their specificity, with only pests affected by the compound [1].
Recently, publications have reported the biological activity of plant compounds called "limonoids" [2][3][4][5][6][7][8][9][10][11], emphasizing their high insecticidal potency as biopesticides at the macro- and microscopic levels, which gives us the opportunity to study their possible mechanism of action at the molecular level. We assume the following mechanism of action for the limonoid compounds in our theoretical study: upon binding to their target receptor, the protein ecdysone 20-monooxygenase (E20MO for short), the limonoid ligands interrupt the downstream release of ecdysteroid hormones. As a direct consequence, certain biological functions of the living insects are hampered at different levels, consequently causing their death [7,10]. The hitherto unknown affinities (binding energies) to the E20MO receptor can be estimated on computational grounds. So-called ligand-receptor docking studies can be carried out to simulate the molecular interaction of the limonoid ligands with their putative target protein, E20MO, which was postulated as a target prior to this study [10] and cited by [11]. In agro-chemistry, as in medicinal chemistry, computing physicochemical properties or simulating biochemical reaction pathways helps us to understand the mechanism of action of drugs or of chemical substances like the limonoids in our in silico study. In addition, awareness about the use of biopesticides, which may arise after molecular characterization, can be spread for application in everyday life. Limonoid agents of type "C-seco" (acronym: LACS) belong to the secondary metabolite class of terpenoids and can be isolated from the roots of Azadirachta indica A. Juss. This Indian plant is regionally better known as the "Neem tree" and belongs to the Meliaceae plant family, which in turn branches into more than 50 genera and more than 1400 species dispersed in tropical and sub-tropical climates throughout the world.
Results
Stable structures with local energy minima in the ground state were generated for all 3D models. A comparison of azadirachtin A (Aza) against the optimized structure was carried out using a nuclear magnetic resonance Mosher approach [5]. This structure served as a scaffold for constructing the first limonoid, Aza, which was optimized to obtain its ground-state structure (see Figures S1-S10 in Supplementary Information). Then, by adding the substituents and optimizing under the B3LYP protocol with the 6-311+G** basis set, the eight remaining C-seco limonoid structures were obtained. As a valuable asset, the steroid scaffold of ecdysone was found in the compound repository of the quantum chemistry software Gaussian 16 [12]. Again, the missing substituents were added, and the complete molecule was optimized under B3LYP/6-311+G** (see Figure 1). In Figure 2, the reference ligand is shown in its active conformation attached to a heme group at the putative target binding site.
Target Modeling by Primary Sequence Homology
No PDB entry was found in the RCSB Protein Data Bank (https://www.rcsb.org/, accessed on 10 November 2023 [13]) for the target ecdysone 20-monooxygenase (EC 1.14.99.22), also known under its short name E20MO or the alternative names CYPCCCXIVA1 and mitochondrial cytochrome P450 314a1 (CYP314A1). The protein sequence of the target enzyme E20MO was obtained from the UniProt database (Q9VUF8) for the fruit fly (Drosophila melanogaster) prior to the BLASTp search for related 3D templates against the Protein Data Bank (https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Proteins, accessed on 19 November 2023) with which to model E20MO by homology [14]. In the next step, eight Protein Data Bank entries were preselected and retrieved (see Table S1 in Supplementary Information). Since no structure totally matched our target protein, our selection criteria were as follows: (i) a high identity percentage, (ii) similar biological activity (enzyme class, oxidation by heme group), and (iii) chemical similarities between PDB ligands and our limonoids.
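Such a template search can also be scripted. The sketch below is an illustration rather than the authors' workflow: they used PSI-BLAST on the NCBI website, whereas this runs a plain blastp query through Biopython's web client, and the local FASTA file name is an assumption.

```python
# Hedged sketch: blastp query of the E20MO sequence (UniProt Q9VUF8) against
# PDB sequences via Biopython's NCBI web client. Network call; may take minutes.
from Bio.Blast import NCBIWWW, NCBIXML

query_fasta = open("Q9VUF8.fasta").read()   # assumed local FASTA of the query
handle = NCBIWWW.qblast("blastp", "pdb", query_fasta)
record = NCBIXML.read(handle)

# Rank the top template candidates by sequence identity, as in criterion (i)
for alignment in record.alignments[:8]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  id={identity:.1f}%  e={hsp.expect:.1e}")
```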
Prior to selecting the final template, an additional PDB search was carried out looking for ligands with chemical similarities to limonoids or to the insect hormone ecdysone (see Table S2 in Supplementary Information). The idea was to find ligand-receptor interaction patterns at the binding sites of protein complexes with ligands closely related to our limonoids.
Figure 2.
Display of the liganded heme group at the binding site of the reference crystal structure, determined by X-ray diffraction at 3 Å resolution, from the RCSB Protein Data Bank [13]. The complex represents aldosterone synthase (CYP11B2) with a bound tetrahydro-isoquinolinyl propenamide derivative. Its protein backbone served as a 3D template for target protein modeling by homology. In addition, it was used as a reference for docking validation by (successful) back docking of its reference ligand QHC into its experimentally determined pose in the aldosterone synthase complex (CYP11B2); the computed value is in excellent keeping with the experimental range of affinities. Color code for atoms of the stick models: O in red, N in blue, C in beige, and the halogen atoms F and Cl in green. H atoms omitted; the central iron atom of the complex is shown as a red ball.
Another criterion was to assess the evolutionary distance in a phylogenetic tree. It was based on the primary sequences of our template candidates (see Supplementary Information Table S1 [15][16][17][18][19][20][21][22] as well as Table S2 [23][24][25]). A plethora of heme-containing PDB entries exist, so there was a need to focus on the eight preselected template candidates, which are all closely related proteins by homology. The phylogenetic tree analysis was carried out in the MEGA 7 software [26], which displays the protein relatedness in a diagram (Figure 3). The branches (ramifications) of the tree, along with the line lengths, reflect the evolutionary distance between the 3D template candidates (rightmost labels in Figure 3 are the PDB entries) and the target protein (leftmost starting branch in Figure 3). As a direct result, only a few PDB entries had to be retrieved for inspection of their 3D structures for potential use as 3D templates for target protein model generation by homology. Statistics were applied with 1000 bootstrapping cycles [27] under a Jones-Taylor-Thornton model [28] (see Figure 3).
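For readers who prefer scripting to the MEGA GUI, a tree exported from MEGA in Newick format can be inspected with Biopython; the sketch below is illustrative only, and the exported file name is an assumption.

```python
# Minimal sketch, assuming the MEGA 7 tree was exported in Newick format.
from Bio import Phylo

tree = Phylo.read("e20mo_templates.nwk", "newick")
Phylo.draw_ascii(tree)  # quick text rendering of the topology

# Branch-length distance from the E20MO query to each candidate template
for leaf in tree.get_terminals():
    if leaf.name != "Q9VUF8":
        print(leaf.name, round(tree.distance("Q9VUF8", leaf.name), 4))
```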
Since only the primary structure of the insect target enzyme E20MO is known to date (UniProt ID: Q9VUF8 [14]), a highly suited 3D template was chosen for target protein modeling thanks to several observations: (i) the target constitutes a heme-bearing enzyme belonging to the oxidoreductase family (EC 1.14.99.22), while the selected template protein (PDB entry 4ZGX [18]) also belongs to the oxidoreductase enzyme class (EC 1.14.15.5); the template is the human heme-bearing cytochrome P450 CYP11B2, also known as aldosterone synthase. (ii) Its crystal structure was completely resolved, and (iii) it is related to the target not only structurally but also functionally, because it catalyzes the hydroxylation step of steroid substrates in the biosynthesis of the mineralocorticoid aldosterone.
The target homology model was generated based on the Cartesian coordinates of 4ZGX using the Swiss-Model software [29] (see Table S1 in Supplementary Information). It was labeled E20MO4ZGX (see Figure 4). After the potential energy minimization, the final target model differs in its overall geometry from the template; the difference was measured as the root mean square deviation between atom-pair positions (RMSD = 3 Å). In the next step, we added the heme group of the CYP P450 enzymes under Swiss-PdbViewer, applying its superposition tool Magic Fit [30]. The binding cavities were inspected for possible limonoid binding, and the unoccupied volumes at the heme site were measured. The unoccupied cavity for E20MO4ZGX is 250 Å³.
Moreover, the template protein constituted the best choice with regard to structural fitness and chemical propensities to accommodate the ligands (molecular weight, element formulae, volume, terpenoid likeness, etc.). A large variation in binding site volume is well known for the CYP P450 enzyme family [31,32]. These enzymes undergo induced-fit processes to adopt new conformations, which enable them to accommodate a wider range of substrates for oxidation or hydroxylation reactions [33]. In the subsequent procedure, docking showed that target model E20MO4ZGX was able to accommodate even the largest ligands. In the case of 4ZGX, experimental binding data are available (EC50: 7 nM to 31.4 µM) [18], which can be exploited to evaluate whether the software is capable of docking the ligand back into its binding site with the observed binding mode and pose.
Molecular Simulation Details
With the 3D models of limonoid ligands and receptors at hand, the affinities were assessed through docking simulations (see Table S5 in Supplementary Information). To compare and validate the computed results, we re-docked crystal structures with ligands chemically related to ecdysone and the limonoids. The binding energy results were evaluated on a logarithmic scale to assess the potential affinities between limonoids and the target protein and to examine the insecticidal action mechanism at a molecular level. Thanks to the back docking studies, the reliability of our blind docking of limonoids to the target is fairly enhanced, since successful back docking demonstrates that molecules similar in shape and chemistry can also be expected to perform well. Concerning the chemical relatedness, prior to docking, 841 PDB hits (as of 14 November 2018) were screened for chemically related ligands in complex with heme-bearing enzymes by a search pattern for carbon-, hydrogen- and oxygen-containing small organic compounds (formula "CxHyOz"). The presence of aliphatic rings and/or terpenoid-like complexity was also searched for (see Table S3 in Supplementary Information) [35][36][37][38][39][40]. As a direct result, it was inferred that a high affinity between the limonoids and E20MO could be expected.
The different types of chemical similarities between complex ligands and target ligands (limonoids) were analyzed and documented (see Tables S1-S4 in Supplementary Information). The ligands comprise the nine limonoids I-IX as well as the insect hormone ecdysone and the ligand QHC, which was extracted from its crystal structure, the 3D template (4ZGX). The binding values were obtained by blind docking of I-IX and reference ecdysone (X) against target model E20MO4ZGX. The reference ligand QHC was blind docked against target model E20MO4ZGX and back docked into the active site of its crystal structure 4ZGX [18]. In total, twelve docking studies were carried out under the same program settings. At this stage, our docking simulations estimated the affinities to target E20MO (4ZGX). The computed numeric results were compared on a logarithmic scale (see Table S5 and Figure 5). The literature attests that ligand QHC (N-[(8R)-4-(4-chloro-3-fluorophenyl)-5,6,7,8-tetrahydroisoquinolin-8-yl]propanamide) of 4ZGX (see Table S5 in Supplementary Information) binds with affinities between 7 nM and 31,400 nM (31.4 µM) [18]. Our results in Table S5 show a K_i value of 4 nM for the back docked QHC at the binding site of its crystal structure. This result is in good keeping with the aforementioned affinity range (7 to 31,400 nM). Back docking validates our docking approach in a twofold manner: (i) the docked pose of reference ligand QHC is close to the observed X-ray pose; and (ii) the computed affinity value is close to the experimental range. The Gibbs free energy of binding (ΔG) can be converted into molar concentrations by the thermodynamic equation ΔG = R × T × ln(K_i/C°), where R is the universal gas constant, K_i the inhibition constant at equilibrium, T the temperature on the absolute Kelvin scale, and C° the standard-state concentration of 1 mol/L. As a crude rule of thumb, a difference of about 1.4 kcal/mol in ΔG between two ligands corresponds to a tenfold difference in their inhibition constants, which are measured in molarity units, i.e., expressed as the wanted concentration. The molar inhibition constants of the final poses reflected either equal or 10- to 100-fold lower binding strength of the limonoids compared with the values obtained by back docking of the chemically related ligand QHC. Three ligands performed best, i.e., equal concentrations or amounts of comparable order were required to yield the same inhibition effect as the validated reference ligand QHC: compounds IV, V, and VII. A tenfold higher concentration was required, i.e., the compounds acted at a tenfold lower level, for compounds II, III, VI, VIII, and X; the latter is ecdysone itself. A third group had a 100-fold weaker affinity, i.e., they needed 10- to 100-times higher concentrations to cause the same enzyme-blocking response: compounds I and IX (see Table S5 in Supplementary Information).
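The conversion stated above is easy to reproduce. The short Python sketch below is our own illustration (using T = 298.15 K and the 1 mol/L standard state); it turns docking free energies in the reported −9 to −13 kcal/mol window into inhibition constants and reproduces the nanomolar range discussed here.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.15     # absolute temperature in K
C0 = 1.0       # standard-state concentration, 1 mol/L

def ki_from_dg(dg_kcal_per_mol):
    """Inhibition constant in mol/L, from dG = R*T*ln(Ki/C0)."""
    return C0 * math.exp(dg_kcal_per_mol / (R * T))

for dg in (-9.0, -11.0, -12.0, -13.0):
    print(f"dG = {dg:5.1f} kcal/mol  ->  Ki = {ki_from_dg(dg) * 1e9:8.2f} nM")
```

At this temperature, 2.303 × R × T ≈ 1.36 kcal/mol, which is the ΔG spacing that corresponds to one order of magnitude in K_i and underlies the tenfold groupings above.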
All ligands were displayed in a superposition of their final docked poses for visual verification (see Figure 5). Moreover, figures of the individual superpositions of the ligands onto the target were also documented (see Figures S1-S10 in Supplementary Information).
If ligands and receptors are similar, any successful back docking supports the blind docking simulations (see Figure 6). This condition was already taken into account by one of the three selection criteria, which was to look not only for related proteins but also for similar ligands (see Section 2.1). For target E20MO4ZGX, a threefold final validation step was carried out. We reproduced the final poses and affinities of experimentally known crystal complexes taken from our data collection (see Tables S1-S4). To this end, three reported enzyme-ligand crystal structures were successfully docked back: (i) 4ZGX with its ligand QHC; (ii) 3SN5 with its ligand cholest-4-en-3-one; and (iii) 5FOI with its micinamycin ligand (see Table S4). In addition, the results were found to be in keeping with computed scoring data for azadirachtin and 1-cinnamoylmelianolone (limonoid V).
Figure 6. Display of the back docked reference ligand in superposition with its crystal pose. Only the binding site is shown; both poses of the ligand QHC lie above the heme group. The central iron cation is buried below the binding nitrogen atom of the ligand. The computed pose is the most populated cluster of solutions and shows the second strongest binding affinity of all 256 predicted docking solutions. Of note, the computed phenyl ring conformation is flipped around its binding axis to the scaffold by 180 degrees (leftmost ring on both ligands). The underlying bond formally constitutes a single bond but bridges the resonance between two aromatic rings (pi electron conjugation). Since docking is based on molecular mechanics, where partial charges and polarization can only be represented electrostatically, a trade-off has to be made between two options: (i) "freeze in" the original torsion angle between the F-, Cl-substituted phenyl ring and its scaffold aromatic ring, or (ii) define it as a rotatable bond prior to docking. For unbiased validation, a random start position and free rotation were chosen. Color code: F and Cl in yellow; C atoms of the ball-and-stick model from docking in magenta; C atoms of the stick model from 4ZGX in light blue. Space-fill model: heme group, with atom colors N blue, O red, C pink, H white. All hydrogen atoms on the ligand were omitted. Molecular modeling software UCSF Chimera 1.16 [34].
Discussion
Although the proposed biomolecular target has never been elucidated by experimental evidence, this study sheds light on the theoretical molecular mechanism of action of natural limonoids at the putative target protein ecdysone 20-mono-oxygenase. Our findings are of a preliminary character, but the value of this research lies in our proposal to guide future research towards this putative target [7,10]. New studies could include experimental assays to identify the target biomolecule, as was the case with anti-melanogenic limonoids [41]. In an optimal situation, plant extraction and analytical identification of limonoids could be combined with the partial or total synthesis of new derivatives after molecular design and biological activity assays, as reported in a study for berberine chloride [42]. Limonoids, like other essential oil ingredients, possess a vast applicability range for industrial food production as well as for industrial agriculture in developing sustainable agrochemicals, all of which lends them a dual function as a nontoxic food additive for humans and as insecticide agents [6,43]. Their potential assets have been outlined in the literature recently [44,45]. In particular, in the context of environmental toxicity, biodegradable limonoids are less harmful than older synthetic pesticides, which persist as pesticide residues in crops in the field or are washed into surface water, ending up contaminating drinking water or food plants and fruits, all of which can be detected in pesticide residue analyses [46,47].
Materials and Methods
The three-dimensional structures of the studied LACS were generated by ab initio optimization of their geometries, solving the Schrödinger equation with the approximation of atomic orbitals. Electronic charge calculations were performed using the RHF method with an STO-3G basis to explore atomic reactivities in the molecules (see Figure 1). We optimized the molecular structures of our limonoids and the reference molecule ecdysone with ab initio methods using the Gaussian 16 & GaussView 6 package [12]. In a subsequent step, Aza was chosen as a general scaffold to formulate the Z-matrices on the ChemSpider Web page (job ID: 4444685) for later use as input data for Gaussian 16 [48]. Precisely, the Z-matrix of Aza was created automatically, then optimized with the bases RHF/STO-3G [49], B3LYP/6-31+G**, and B3LYP/6-311+G** [50]. Once the calculations were completed, a minimum energy structure was obtained in the ground state. The final molecular geometry of Aza was decorated with the substituents of each ligand to obtain the structures of all eight C-seco limonoids. They were then optimized with basis B3LYP/6-311+G** to obtain our eight 3D models, as outlined in the review published by Qin-Gang Tan et al. [11]. Furthermore, the structure of our reference ligand ecdysone was generated starting from its steroid scaffold, which was found in Gaussian 16 with its built-in parameter set. The structure was optimized with basis B3LYP/6-311+G** [12].
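For orientation, a Gaussian 16 input file for such an optimization has a simple, fixed layout. The Python sketch below is an illustration, not the authors' input deck: the molecule (water), file names, and coordinates are placeholders; only the B3LYP/6-311+G** route line reflects the level of theory used in this work.

```python
# Minimal sketch: write a Gaussian 16 input file for a B3LYP/6-311+G**
# geometry optimization. Molecule, file names, and coordinates are placeholders.

def write_gaussian_input(path, title, coords, charge=0, multiplicity=1):
    """coords: list of (element, x, y, z) tuples in Angstroms (Cartesian input)."""
    with open(path, "w") as f:
        f.write(f"%chk={title}.chk\n")
        f.write("#P Opt B3LYP/6-311+G**\n\n")  # route section: level of theory
        f.write(f"{title}\n\n")
        f.write(f"{charge} {multiplicity}\n")
        for element, x, y, z in coords:
            f.write(f"{element:2s} {x:12.6f} {y:12.6f} {z:12.6f}\n")
        f.write("\n")                           # Gaussian needs a trailing blank line

# Trivial placeholder molecule, just to show the file format:
write_gaussian_input("h2o_opt.gjf", "water_b3lyp_opt",
                     [("O", 0.0, 0.0, 0.117),
                      ("H", 0.0, 0.757, -0.467),
                      ("H", 0.0, -0.757, -0.467)])
```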
Due to the absence of a three-dimensional structure of the target biomolecule in the PDB database [13], a three-dimensional model of the ecdysone 20-monooxygenase enzyme was needed. To generate this 3D model of the hitherto structurally unknown E20MO target by protein homology, we searched for related structural templates in the PDB database [13]. To this end, potential 3D templates were retrieved from www.rcsb.org (accessed on 19 November 2023). We evaluated the similarities between potential 3D templates, applying multiple sequence alignment (MSA) techniques under PSI-BLAST [15,51] against the PDB databank (www.rcsb.org, accessed on 19 November 2023). Details of the homology protein modeling technique have been described elsewhere [52].
The following evaluation criteria were considered: (i) overall identity between template and target sequences, (ii) homology of conserved amino acids between template and target sequences, (iii) similarities between the ligands observed at the binding site, (iv) binding modes at the binding site, (v) biological activities, and (vi) evolutionary distance. To this end, the outcome was listed together with structurally known liganded members of the mono-oxygenase protein family.
For the next step, the selected 3D template (PDB code 4ZGX [18]) contained the required heme group (see Figure 2). It was sent as input for web-based 3D template modeling with Swiss-Model (www.swissmodel.expasy.org, accessed on 19 November 2023) [29].
Finally, molecular docking simulations were performed between the nine limonoid-like ligands and the target proteins. To this end, the molecular interactions between the ligands and the modeled proteins were assessed under AutoDock Tools using the Autogrid4 and Autodock4 extensions [30,53]. The liganded complex of the selected 3D template was self-docked as a proof of concept, a procedure also known as a back docking test with a known reference [53,54].
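Operationally, the AutoGrid4/AutoDock4 pair runs from parameter files prepared in AutoDockTools. The sketch below is illustrative only; the .gpf/.dpf file names are placeholders, not the study's actual files.

```python
# Hedged sketch of the AutoGrid4/AutoDock4 command-line workflow; the grid
# (.gpf) and docking (.dpf) parameter files are assumed to have been prepared
# in AutoDockTools beforehand.
import subprocess

# Step 1: precompute the affinity grids around the binding-site box
subprocess.run(["autogrid4", "-p", "e20mo.gpf", "-l", "e20mo.glg"], check=True)

# Step 2: run the docking job specified in the ligand's parameter file
subprocess.run(["autodock4", "-p", "limonoid.dpf", "-l", "limonoid.dlg"], check=True)

# The .dlg log then contains the clustered poses with their estimated free
# energies of binding, which the study compares on a logarithmic (Ki) scale.
```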
As an alternative, target-free approach, a study based on quantitative structure-activity relationships (QSARs) was discarded: with only very few tested molecules in the compound series, a reliable QSAR model cannot be established for either beneficial or toxic agents [52]. In this regard, a promising computational approach was recently published describing a grid-based method applied to eleven organophosphate compounds in use as agricultural pesticides [55].
Conclusions
The food and agrochemical industries have focused on essential oil limonoids. Our in silico study contributes molecular mechanism insight through theoretical results corroborating the empirical findings. They describe the insecticidal activity of the ring C-seco limonoids through inhibition of hormone release upon binding at the active site of the reported target ecdysone 20-mono-oxygenase. Precisely, our ring C-seco limonoids belong to natural compounds of the formula type CxHyOz. While a larger portion of the ligand scaffold constitutes nonpolar aliphatic chains, the oxygen atoms and the aryl rings account for the negative charge density patches on the molecules' surface, which lead to an attractive hydrogen-bonding network with binding-site residues. The target protein was generated by homology modeling, and the heme group was merged into the cavity after superposition with the (heme-bearing) 3D template. Docking simulations revealed high target affinities of the limonoid insecticides, in close range to the insect hormone ecdysone and the back docked ligand of the reference complex. The back docking validation test yielded not only the same pose but also the same affinity level as assessed by experimental data. With a validated procedure at hand, the strongest binding poses of all ligands were taken, and their final docked poses were compared after the superposition operation. Each ligand-receptor complex was analyzed for its interaction pattern and graphically documented (see Figures S1-S10 in Supplementary Information). All told, it seems not far-fetched to assume that our limonoids are strong binders to the E20MO enzyme as their biomolecular target for pesticide action. However, these findings have only a preliminary character, since ecdysone 20-mono-oxygenase has never been experimentally assessed as a target in the literature, which qualifies it as a merely putative target.
Supplementary Materials:
The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/molecules29071628/s1: Figures S1-S10; Table S1: Listing of target protein E20MO; Table S2: Listing of three hits among heme-containing enzymes; Table S3: Ligands chemically related to limonoids; Table S4: Reference molecule ecdysone and related PDB ligands; Table S5: Listing of maximum and minimum free energies from docking.
Funding:
The APC was partly funded by VIEP of BUAP in 2024.
Figure 1 .
Figure 1.Structures of the nine ring C-seco limonoids and their identity labels.
Figure 2 .
Figure 2. Display of the liganded heme group at the binding site of reference crystal structure by xray diffraction at 3 Å resolution from RCSB Protein Data Bank[13].The complex represents aldosterone synthase (CYP11B2) with a bound tetrahydro-isoquinolinyl propenamide derivative.Its protein backbone served as a 3D template for target protein modeling by homology.In addition, it was used as a reference for docking validation by (successful) back docking of its reference ligand QHC into its experimentally determined pose in the aldosterone synthase complex (CYP11B2).In addition, the computed value is in excellent keeping with the experimental value range of affinities.
Figure 3. Phylogenetic tree diagram for potential 3D templates to model the target structure. The rightmost labels display the PDB entry codes (see Supplementary Information Tables S1 and S2). Label Q9VUF8 is the query sequence to relate with the PDB sequences. Since PDB entry 4ZGX is only one branching point apart from Q9VUF8, it was selected as the 3D template. It is followed by two preselected proteins (4UYL and 5EAF). All others are far more distant. The numbers indicate successful bootstrapping cycles to obtain the branching points as shown. Phylogeny reconstruction was achieved under MEGA 7 [26].
Figure 4. Display of the 3D model E20MO4ZGX. It was generated on the backbone coordinates of 3D template 4ZGX as input under the Swiss-Model software [29]. At the center of its active site is the heme group for redox catalysis. The cavity remains unoccupied until docking is performed. Backbone ribbon colors: helices in red; beta strands in blue; loops in green. The heme group and adjacent residues are represented in ball-and-stick display. Atom colors: C in cyan, O in red, N in blue. H atoms are not displayed. Visualization using the software UCSF Chimera 1.16 [34].
Figure 5. Display of target model E20MO with the docking poses of the three strongest binders in superposition. Left panel: ribbon display of the backbone of the ecdysone 20-mono-oxygenase target in steel blue; space-fill model of the active-site heme group (bottom center). Right panel: magnified view of the liganded heme group with the docked poses of compound IV (top), compound V (center) and compound VII (bottom). Hydrogen atoms omitted. Atom color code: O red, N blue; carbon atoms of compounds IV, V and VII in sky blue, magenta and goldenrod, respectively. Model generated from the docking output spatial coordinates with the molecular modeling software UCSF Chimera 1.16 [34].
"Chemistry",
"Biology",
"Environmental Science"
] |
KC21 Peptide Inhibits Angiogenesis and Attenuates Hypoxia-Induced Retinopathy
Desmoglein 2 (Dsg2) is a major component of desmosomes. Dsg2 has five extracellular tandem cadherin domains (EC1-EC5) for cell-cell interaction. We had previously confirmed that a Dsg2 antibody and its epitope peptide (named KC21), derived from the EC2 domain, suppress epithelial-mesenchymal transition and invasion in human cancer cell lines. Here, we screened six peptide fragments derived from the EC2 domain and found that KR20, the parental peptide of KC21, was the most potent at suppressing endothelial colony-forming cell (ECFC) tube-like structure formation. KC21 peptide also attenuated migration but did not disrupt viability or proliferation of ECFCs, consistent with its function of inhibiting VEGF-mediated activation of p38 MAPK but not AKT and ERK. Animal studies showed that KC21 peptides suppressed capillary growth in the Matrigel implant assay and inhibited oxygen-induced retinal neovascularization. The effects were comparable to those of bevacizumab (Bev). In conclusion, KC21 peptide is an angiogenesis inhibitor potentially useful for treating angiogenesis-related diseases. Electronic supplementary material The online version of this article (10.1007/s12265-019-09865-6) contains supplementary material, which is available to authorized users.
Introduction
Desmosomes provide strong adhesion to maintain tissue function and organ architecture. Organs that frequently experience mechanical stress, such as the skin and heart, express particularly abundant desmosomes to provide plasma membrane attachment sites for adjacent cells [1]. Desmosomes are adhesive intercellular junctions comprising two cadherin proteins, desmogleins (Dsg) and desmocollins [2]. The human genome encodes four desmogleins (Dsg1-4), which are single-pass transmembrane proteins with five extracellular tandem conserved cadherin domains (EC1-EC5) and an intracellular domain that binds to intermediate filaments via the adaptor proteins desmoplakin and plakoglobin [1]. Electron tomography studies of native desmosomes revealed that intercellular cadherin binding sites are composed of EC1 domains [3,4]. The specificity of adhesion had been confirmed by function-blocking peptides derived from the EC1 domain [5]. Differential proteolytic cleavage fragments containing EC domains had been determined in human cancer lines [6]. Clinically, shedding of Dsg2 extracellular domains is detected in patients with ulcerative colitis [7]. Mutations of Dsg2 are detected in patients with arrhythmogenic right ventricular cardiomyopathy (ARVC) [8], and expression of Dsg2 is increased in several epithelial-derived malignancies including basal-cell carcinomas, squamous cell carcinomas, and metastatic prostate cancer [9][10][11]. These studies show the importance of Dsg2 homeostasis for the regulation of signaling in cell proliferation, migration, and epithelial-mesenchymal transition (EMT).
The therapeutic potential of endothelial progenitor cells (EPCs) has gained great interest since the observations that a significant decrease in the number of circulating EPCs was detected in patients with severe conditions, such as diabetes and repeated hospitalization for heart attacks [12]. EPCs isolated from peripheral blood consistently produce two distinct subtypes, named early EPCs and endothelial colony-forming cells (ECFCs), the latter also called late EPCs for their late appearance in culture. Early EPCs, which produce paracrine factors, have limited culturing passages, whereas ECFCs, which directly incorporate into vasculature, have a strong growth capacity. Intramuscular injection of human ECFCs rescues blood perfusion in hindlimb ischemic mice [13], providing a rationale for clinical trials using ECFC infusion as a therapy for ischemic cardiovascular disease [14].
Previously, we had identified the antagonist role of Dsg2 in cancer metastasis [15]. Polyclonal Dsg2 antibody and the immunogenic epitope derived from the EC2 domain suppress EMT and invasion of human melanoma, breast cancer, and prostate cancer cells, consistent with the observation that Dsg2 exhibits a non-adhesive function in cell migration and morphogenesis [1,5,6]. Here, we use the Dsg2 antibody and its immunogenic peptide KC21 to test their effects on the control of vessel overgrowth in vivo and to screen the candidates involved in Dsg2-mediated ECFC angiogenesis.
Cell Viability and Proliferation Analysis
Cell viability was measured using the cell counting kit-8 (CCK-8) (Sigma-Aldrich), which reflects the dehydrogenase activity of living cells. ECFCs were seeded onto 96-well plates and treated with Dsg2-derived peptides (100, 200, and 400 μM). Twenty-four hours later, CCK-8 solution was added to each well for 4 h, and the medium was harvested for measurement of absorbance at 450 nm using a microplate reader. For the cell proliferation assay, ECFCs were treated with Dsg2-derived peptides (100, 200, and 400 μM) for 4 h and then fixed. Cells labeled with 5-bromo-2′-deoxyuridine (BrdU) were subsequently identified with a primary antibody against BrdU and visualized with a secondary antibody conjugated with horseradish peroxidase, using tetramethylbenzidine as a substrate.
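As a minimal illustration of how such plate-reader data are typically reduced, the following Python sketch normalizes blank-corrected A450 readings to the untreated control; the function name, example readings, and blank correction are hypothetical and not taken from this study.

```python
import numpy as np

def percent_of_control(od_treated, od_control, od_blank=0.0):
    """Normalize CCK-8 (or BrdU) absorbance to untreated control = 100%.

    od_treated: A450 readings for peptide-treated wells.
    od_control: A450 readings for untreated (PBS) wells.
    od_blank:   medium-only background, subtracted from both.
    """
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.mean(np.asarray(od_control, dtype=float) - od_blank)
    return 100.0 * treated / control

# Hypothetical triplicate readings for one peptide dose
print(percent_of_control([0.81, 0.79, 0.84], [0.80, 0.83, 0.82], od_blank=0.05))
```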
Immunohistochemistry
Immunostaining of cell cultures was performed as described previously [13]. The antibodies used and their dilution ratios were rabbit anti-Dsg2
Zymography Assay
ECFCs were seeded at 80% confluence on 60-mm dishes in MV2 complete medium. The next day, cells were incubated in MV2 medium with 2% FBS and various concentrations of KC21 peptides for 24 h. Fifty μg of the conditioned medium was analyzed on a 10% zymogram gel containing 0.1% gelatin [17]. After electrophoresis, the gels were washed and incubated at 37°C for 24 h. The gels were stained with Coomassie Blue to visualize proteinase activity. The digested areas appeared clear over a blue background, indicating the location of matrix metalloproteinase 2 and 9 (MMP2 and MMP9) activity. MMP3 activity was determined by casein zymography [18].
PAI-1 Activity Assay
ECFCs were treated with KC21 or scramble peptides for 24 h. Conditioned media were harvested for PAI-1 activity determination. The PAI activity assay kit (Chemicon) uses a chromogenic substrate that is cleaved by active uPA and detected by its optical density at 405 nm. Addition of PAI-1 to the conditioned media blocks the cleavage of the substrate by uPA. The relative PAI-1 activity was obtained by plotting against a standard curve of 10 units of uPA inhibited by a serial dilution of PAI-1 incubated at 37°C for 2 h, as described in the assay instructions.
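The back-calculation of PAI-1 activity from the residual uPA signal amounts to interpolating sample OD405 values on the standard curve. The sketch below illustrates this with NumPy; the standard-curve values and function name are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical standard curve: OD405 of 10 U uPA incubated with a
# serial dilution of PAI-1 standard (higher PAI-1 -> lower OD).
pai1_units = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
od405_std  = np.array([1.20, 1.05, 0.88, 0.62, 0.35, 0.15])

def pai1_activity(od_sample):
    # np.interp needs ascending x, so interpolate on the reversed curve.
    return float(np.interp(od_sample, od405_std[::-1], pai1_units[::-1]))

print(f"PAI-1 activity ~ {pai1_activity(0.70):.2f} U")
```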
Western Blot
ECFCs were lysed with SB-20 buffer (0.2 g/mL SDS, 10 mM EDTA, 100 mM Tris-HCl, pH 6.8), and protein concentrations were determined by the modified Lowry method. Aliquots of cell lysates were loaded onto 10% SDS-polyacrylamide gels, electrophoresed, and transblotted onto polyvinylidene fluoride membranes (Millipore). The blots were blocked with 10% bovine serum albumin for 1 h and probed with the indicated primary antibody for 2 h. The blots were then incubated with alkaline phosphatase-conjugated secondary antibodies for 1 h at room temperature. Immunoreactivity was visualized using the CDP-Star system (Roche) according to the manufacturer's instructions. Primary antibodies against Dsg2 (Santa Cruz) and against p38, p-p38, Akt, p-Akt, ERK, p-ERK, MMP9, and PAI-1 (Cell Signaling) were diluted 1:1000 in PBS.
Wound Healing Assay
ECFCs were grown on twenty-four-well plates to confluence. A cell-free gap was generated using the SPLScar™ Block (0.5 mm width, #201905, SPL Life Sciences, Korea) and photographed by optical microscopy (Leica, Germany) at × 40 magnification as the baseline. After 4 h of culture, cells were fixed and imaged to measure new growth areas using ImageJ software (NIH). The ratio of the new migration area was calculated relative to the initial wound area and normalized to that of PBS-treated cells as described [19].
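A minimal sketch of the area-ratio calculation is given below, assuming the 0 h and 4 h gap regions have already been segmented into boolean masks (e.g., by intensity thresholding); the mask shapes and values are hypothetical.

```python
import numpy as np

def migration_ratio(gap_mask_0h, gap_mask_4h):
    """Fraction of the initial cell-free gap re-covered by cells.

    Masks are boolean arrays (True = cell-free gap pixels) segmented
    from the 0 h and 4 h images.
    """
    initial = np.count_nonzero(gap_mask_0h)
    remaining = np.count_nonzero(gap_mask_4h)
    return (initial - remaining) / initial

# Hypothetical 100x100 masks: gap shrinks from 40 to 28 columns wide.
m0 = np.zeros((100, 100), bool); m0[:, 30:70] = True
m4 = np.zeros((100, 100), bool); m4[:, 36:64] = True
print(f"migrated fraction = {migration_ratio(m0, m4):.2f}")  # 0.30
```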
Matrigel Tube Formation Assay and Quantification
Growth factor-reduced Matrigel (BD Biosciences) was thawed at 4°C before use. Twenty-four-well plates were coated with Matrigel (200 μL/well) and polymerized for 30 min at 37°C. ECFCs resuspended with various concentrations of KC21 peptides, scramble peptides, anti-Dsg2 antibody (10 ng/mL), or Bev (0.25 μg/mL) were seeded on Matrigel-coated wells at a density of 5 × 10⁴ cells in MV2 medium containing 2% FBS for 24 h at 37°C in a 5% CO₂ humidified incubator. Each sample was tested in triplicate on the same plate, and wells were photographed with a Leica microscope with camera (× 40 magnification). Five fields were randomly chosen in each well to measure tube length and junction number manually using Image-Pro Plus 6.0 (Rockville, MD). Total tube length and junction number per field were calculated.
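Where manual tracing is impractical, skeleton-based image analysis can approximate the same two readouts. The Python sketch below (using scikit-image rather than the Image-Pro Plus workflow used in this study) treats each skeleton pixel as one pixel-length of tube and counts skeleton pixels with more than two neighbors as junctions; the toy mask is purely illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize

def tube_metrics(tube_mask, um_per_px=1.0):
    """Approximate total tube length and junction count from a binary
    mask of tube-like structures (e.g., from thresholding an image)."""
    skel = skeletonize(tube_mask)
    total_length_um = skel.sum() * um_per_px
    # A junction is a skeleton pixel with more than two skeleton neighbours.
    neighbours = ndi.convolve(skel.astype(int), np.ones((3, 3), int),
                              mode="constant") - skel.astype(int)
    junctions = int(np.count_nonzero(skel & (neighbours > 2)))
    return total_length_um, junctions

# Toy mask: two crossing 1-px tubes -> one central junction
mask = np.zeros((21, 21), bool)
mask[10, 2:19] = True
mask[2:19, 10] = True
print(tube_metrics(mask, um_per_px=2.0))  # expect ~66 um and 1 junction
```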
Animal Experiments
All animal experiments were approved by the Institutional Animal Care and Use Committee of the Mackay Memorial Hospital. C57BL/6 mice were kept and bred in accordance with the institutional ethical committee guidance (approval number: MMH-A-S-105-67).
Matrigel Plug Assay
To assess the antiangiogenic effects of KC21 peptides in vivo, growth factor-reduced liquid Matrigel (0.5 mL) containing heparin (60 U/mL), VEGF (10 ng/mL, with the exception of the control), and KC21 or scramble peptides was subcutaneously injected into mice near the abdominal midline. Seven days after injection, the mice were euthanized and the Matrigel plugs were surgically removed. For macroscopic analysis of angiogenesis, the hemoglobin content in the Matrigel was measured with the Drabkin's reagent kit 525 (Sigma-Aldrich).
Oxygen-Induced Retinopathy Assay
Retinal neovascularization was induced using a well-established murine model of oxygen-induced retinopathy [20]. Neonatal C57BL/6 mouse pups at postnatal day 7 (P7), with their nursing mothers, were maintained for 5 days in 75% oxygen and then returned to room air (relative hypoxia) to produce retinal neovascularization at P12. PBS, scramble peptides, KC21 peptides (25 μg), or Bev (10 µg) were then administered by intravitreal injection into mouse eyes at P12. The animals were sacrificed and the mouse eyes were enucleated at P17. Mouse eye cups were fixed in 4% paraformaldehyde for 2 h. The retinas were carefully separated from the eye cups and then incubated with fluorescein-labeled isolectin-B4 (Life Technologies) at 4°C overnight. Samples were mounted with Vectashield medium (Vector Laboratories), and the isolectin labeling was examined using the × 20 objective of a Leica TCS SP5 confocal microscope. Fluorescence volume measurements were recorded by creating image stacks of optical slices within lesions with QWIN software.
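Conceptually, the volume measurement reduces to counting suprathreshold voxels in the isolectin-B4 z-stack and scaling by the voxel volume. The following Python sketch illustrates this; the threshold, stack, and voxel size are hypothetical, and the study itself used QWIN software for this step.

```python
import numpy as np

def fluorescence_volume(stack, threshold, voxel_um3=1.0):
    """Neovascular volume from a confocal z-stack.

    stack: 3D array (z, y, x) of isolectin-B4 fluorescence intensities.
    threshold: intensity above which a voxel is counted as vessel signal.
    voxel_um3: physical volume of one voxel in cubic micrometers.
    """
    vessel_voxels = np.count_nonzero(stack > threshold)
    return vessel_voxels * voxel_um3

# Hypothetical 20-slice stack of 512x512 images
stack = np.random.poisson(5, size=(20, 512, 512)).astype(float)
print(fluorescence_volume(stack, threshold=12, voxel_um3=0.5), "um^3")
```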
Characterization of Peptides Derived from EC2 Domain of Dsg2 for Suppressing ECFC Tube Formation
We had identified the antagonist role of the EC2 domain in suppressing EMT and invasion of human cancer cells [15]. In this study, we used ECFCs to test the effects of Dsg2-derived peptides on angiogenesis. First, human EPCs were harvested from the peripheral blood mononuclear cells (PBMCs) of healthy donors and characterized as CD34⁺KDR⁺AC133⁺CD31⁺ [21]. The fractions of defined ECFCs were determined by flow cytometry (Fig. S1).
In the Matrigel tube formation assay, KR20 was the most potent peptide in inhibiting human ECFC angiogenic potential with regard to the decrease in average tube length and junction number (Fig. 1c). The KR20 peptide was modified with a cysteine residue at the C-terminus, which is required for carrier protein coupling and is exactly the same sequence used previously for Dsg2 antibody generation [15]. We named this epitope sequence KC21 (Fig. 1b). Both KC21 and the Dsg2 antibody profoundly inhibit ECFC tube formation (Fig. S2). Comparison of KC21 with its parental peptide KR20 in suppressing ECFC tube formation shows similar effects, suggesting that the single cysteine modification does not affect the antiangiogenic activity (Fig. S3).
KC21 Peptides Specifically Inhibit ECFC Tube Formation but Not Viability and Proliferation
We first measured the effects of KC21 peptides on ECFC angiogenic potential and found dose-dependent inhibition. KC21 showed potency similar to that of its parental peptide KR20 (Fig. 2a and S3). The effects of the peptides were specific to tube formation, as they did not change ECFC viability or proliferation rate, as determined by the cell counting kit-8 (CCK-8) assay and the BrdU incorporation assay, respectively (Fig. 2b, c).
KC21 Peptides Neither Co-localize with Dsg2 Nor Decrease Its Level
We used Madin-Darby Canine Kidney (MDCK) cells, an epithelial cell line well known for the expression of functional desmosomes, to detect the Dsg2 expression pattern. As shown in Fig. 3a, Dsg2 was co-localized with plakoglobin (PKGB), a Dsg2-associated protein, at the cell periphery with a clear cell-cell boundary. The expression of Dsg2 in MDCK cells and ECFCs was also examined by flow cytometry (Fig. S4).
To test whether KC21 interacts with Dsg2 in vivo, the N-terminus of KC21 was conjugated with FITC (indicated as FITC-KC21, green) to track its cellular distribution. FITC-KC21 was taken up by MDCK cells within 30 min after treatment (Fig. 3b, left column). Dsg2 strongly decorated the cell periphery (Fig. 3b, red); however, there was no co-localization of Dsg2 with KC21. Of note, the level of Dsg2 was not affected by KC21 treatment: as shown by western blot assay, the protein levels of Dsg2 were not changed in ECFCs treated with different concentrations of KC21 (Fig. S5).
Fig. 2 KC21 peptides inhibit ECFC tube formation but not viability and proliferation
a Representative images of ECFC tube-like structures in Matrigel culture. ECFCs were treated with the indicated peptides for 16 h. Right, quantification of total ECFC tube length and junction number per field. **p < 0.001, compared with PBS-treated cells. Scale bar, 300 μm. b ECFCs were cultured with a series of KC21 or scramble peptides (400 μM) for 16 h and viability was measured by the CCK-8 kit. c ECFCs were treated with a series of KC21 or scramble peptides (400 μM) for 24 h and the proliferation rate was compared by BrdU incorporation assay.
KC21 Peptides Inhibit ECFC Migration and VEGF-Induced p38 Kinase Activation
As KC21 peptides show antiangiogenic potential, we next asked whether KC21 inhibits ECFC migration, a prerequisite step of angiogenesis. As shown in Fig. 4a, the migrating activity of ECFCs was decreased after KC21 peptide treatment in the wound healing assay.
KC21 Peptides Inhibit VEGF-Induced Capillary Growth and Decrease MMP9 Activity
Since KC21 inhibits ECFC tube-like structure formation in vitro, we tested whether it inhibits VEGF-induced angiogenesis in vivo by the subcutaneous Matrigel plug assay. VEGF (10 ng/mL) profoundly induced capillary growth in plugs containing PBS and scramble peptides, while the induction was significantly abolished by KC21 peptides (Fig. 5a). The content of hemoglobin, an indicator of infiltrating erythrocytes, was markedly decreased in KC21-containing plugs (Fig. 5a, bar chart). The Matrigel plugs were sectioned and stained for CD31 (PECAM1) to detect capillaries and infiltrated cells (Fig. 5b). KC21 profoundly inhibits the angiogenic effects of VEGF. Digestion of the extracellular matrix is the initial step of angiogenesis. Therefore, we further tested the effects of KC21 on angiogenic MMPs in ECFC culture. The results showed that KC21 peptides specifically inhibit the activities of extracellular MMP9, but not MMP2 or MMP3 (Fig. 5c, e). The intracellular MMP9 protein levels remained unchanged in ECFCs with KC21 treatment (Fig. 5d), suggesting that KC21 regulates the activity rather than the amount of MMP9.
It has been proposed that activation of MMP9 is regulated by the plasminogen/plasmin fibrinolytic system, which is controlled by plasminogen activator inhibitor-1 (PAI-1) [30]. We tested the effects of KC21 peptides in regulating the activity and cellular level of PAI-1. As shown in Fig. 5f, the level and activity of PAI-1 were increased in ECFCs with KC21 peptide treatment.
KC21 Peptides Suppress ECFC Angiogenesis and Inhibit Hyperoxia-Induced Retinal Neovascularization
In order to evaluate the antiangiogenic effect of KC21 on pathogenic neovascularization, we compared KC21 treatment with Bev, an FDA-approved therapeutic antibody for AMD. As shown in Fig. 6a, KC21 peptides inhibit ECFC tube formation, while Bev has no effect. Furthermore, we used a well-established mouse oxygen-induced retinopathy (OIR) model to test the therapeutic effects of KC21 peptides on neovascularization. Under normoxic conditions, limited antiangiogenic effects were observed in mice with either KC21 (400 μM in 1 μL volume) or Bev (20 ng in 1 μL volume) treatment (Fig. 6b, upper 4 panels), while under hyperoxic conditions the induction of neovascularization was blunted by Bev intravitreal injection (Fig. 6b, lower 4 panels). KC21 injection markedly inhibited retinal neovascularization, as indicated by the intensity of endothelial cells stained with isolectin-B4.
Discussion
Our study showed that the designed KC21 peptides have the ability to inhibit human ECFC migration in vitro, to reduce VEGF-induced capillary growth, and to suppress oxygen-induced retinal neovascularization in vivo. At the cellular level, KC21 peptides specifically attenuate VEGF-induced activation of p38 MAPK, but not the signaling targets Akt and ERK. Also, KC21 peptides regulate MMP9 and PAI-1 to stabilize the extracellular matrix of ECFCs.
Fig. 4 KC21 peptides inhibit ECFC migration and VEGF-induced p38 kinase activation
a Representative images (left) and quantification results of the wound healing assay. Gaps were formed (cyan) after the inserted plugs were removed from confluent ECFCs. The cells were then treated with scramble or KC21 peptides (400 μM each) for 4 h. New growth areas after the 4-h culture were colored magenta and quantified. **p < 0.001. b Western blot of protein kinases induced by VEGF (10 ng/ml) in ECFCs treated with scramble or KC21 peptides (400 μM) for 24 h, and quantification results (right). The intensity ratio of phosphorylated kinase versus non-phosphorylated kinase was normalized with the intensity of the loading control, α-Tubulin. Each group was compared with the PBS group, which was set as 100%. Values are mean ± SD of triplicate assays from 3 independent experiments. *p < 0.05; **p < 0.001, compared with PBS-treated cells.
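For clarity, the densitometric normalization described in the Fig. 4 legend can be expressed in a few lines of Python; all band intensities below are hypothetical placeholders, not measured values.

```python
def percent_of_pbs(p_kinase, total_kinase, tubulin, pbs_ratio):
    """(phospho/total) kinase ratio, normalized to the alpha-Tubulin
    loading control and expressed as percent of the PBS control group."""
    ratio = (p_kinase / total_kinase) / tubulin
    return 100.0 * ratio / pbs_ratio

# Hypothetical band intensities (arbitrary densitometry units)
pbs_ratio = (8200 / 9100) / 15000          # PBS-group normalized ratio
print(f"{percent_of_pbs(4100, 9000, 14800, pbs_ratio):.0f}% of PBS")
```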
The functions of Dsg2 in angiogenesis have gained attention recently. Depletion of Dsg2 by siRNA impaired tube-like structure formation in MVECs [31]. These findings suggested that Dsg2 is a molecular target in regulating angiogenesis. Consistently, Dsg2 antibody profoundly inhibits tube-like structure formation (Fig. S2). The effects of Dsg2 antibody and KC21 on ECFC angiogenesis inhibition are comparable to the application of Dsg2 antibody and EC2 domain peptides in disturbing intercellular barriers in human colon carcinoma (Caco) enterocytes [32]. Our previous study also showed that Dsg2 antibody and the immunogenic epitope derived from the EC2 domain suppress EMT and metastasis of human cancer cell lines [15]. As the EC2 domain is not involved in Dsg2 homotypic interaction and Dsg2 is ubiquitously distributed in ECFCs, the effects of KC21 and Dsg2 antibody on suppressing ECFC angiogenesis and migration support the observation that Dsg2 exhibits a non-adhesive function to regulate cell migration and tissue morphogenesis [1,5,6]. Our results suggest that KC21 peptides may attenuate ECFC migration through inhibiting VEGF-mediated p38 activation (Fig. 4), consistent with the finding that p38 MAPK mediates VEGF-induced migration in HUVECs [33].
Fig. 5 (caption, panels c-f): … (blue). c Gelatin zymography assay for MMP2 and MMP9 (upper) and quantification results of MMP9 activity (lower). d Western blot of MMP9 levels in harvested ECFCs and quantification results (lower). GAPDH serves as a loading control. e Casein zymography assay for MMP3 activity (upper) and quantification results (lower). f Western blot of PAI-1 in conditioned media of ECFCs (upper) and quantification results (lower). Ponceau S staining is a loading control alternative to GAPDH as described [39]. Lower right, ECFC conditioned media were harvested and subjected to PAI-1 activity measurement. Values are mean ± SD of triplicate assays from 3 independent experiments. *p < 0.05 compared with PBS (untreated control) cells.
Proteolysis of the extracellular matrix is an initiation step for the recruitment of endothelial progenitor cells to establish new capillaries. Matrix metalloproteinases (MMPs) are extracellular endopeptidases that selectively degrade components of the extracellular matrix. Shedding of Dsg2 ectodomains by MMPs had been detected in the inflamed intestinal mucosa of mice with colitis and of patients with ulcerative colitis [7]. In this study, KC21 peptides specifically inhibited the activity of MMP9 but not MMP2 and MMP3 (Fig. 5c, d), consistent with the results that the ectodomains of Dsg2 are substrates of MMP9 but not of the other MMPs [7]. As MMP9 activity is regulated by plasmin and inhibited by PAI-1, the increase in PAI-1 cellular level and activity by KC21 peptides may further inhibit MMP9 activity (Fig. 5e).
The therapeutic functions of mature endothelial cells in ischemic diseases have been tested in a hindlimb ischemia animal model [34]. Both KC21 and Bev strongly suppress angiogenesis in mature endothelial cells, including human aortic endothelial cells (HAECs) and human umbilical vein endothelial cells (HUVECs) (Fig. S6). However, to our surprise, KC21 profoundly inhibits ECFC tube formation (Fig. 6a), while Bev has no such effect. As both mature ECs and ECFCs contribute to the progression of pathogenic angiogenesis [35][36][37], KC21 might be a more potent peptide drug than Bev for controlling ocular neovascularization.
To our knowledge, this study is the first report showing evidence for Dsg2-derived peptides suppressing retinal neovascularization in the mouse oxygen-induced retinopathy (OIR) model. Neovascularization causes ocular vessel leakiness, edema, retinal detachment, and even blindness. VEGF is a major hypoxia-induced angiogenic factor and is found to be increased in the vitreous and retina, exacerbating retinopathy. Clinically, therapeutic agents against VEGF, such as Bev and aflibercept, have been widely used to control the overgrowth of retinal blood vessels for vision rescue. However, more than 30% of patients do not respond to these therapies, and adverse events have also been reported [38]. The finding of this study that the inhibitory effects of KC21 peptides on neovascularization in the mouse OIR model are comparable to those of Bev (Fig. 6b) provides a new potential target for developing alternative or combined therapeutic options for retinal vascular diseases.
Abbreviations: Dsg, desmoglein; Bev, bevacizumab; OIR, oxygen-induced retinopathy; AMD, age-related macular degeneration; VEGF, vascular endothelial growth factor; MMPs, matrix metalloproteinases; PAI-1, plasminogen activator inhibitor 1; EPCs, endothelial progenitor cells; ECFCs, endothelial colony-forming cells; MVECs, microvascular endothelial cells; ECs, endothelial cells; CCK-8, cell counting kit-8; PBS, phosphate-buffered saline
"Biology"
] |
Identification of an emerging cucumber virus in Taiwan using Oxford nanopore sequencing technology
Background In June 2020, severe symptoms of leaf mosaic and fruit malformation were observed on greenhouse-grown cucumber plants in Xizhou Township of Changhua County, Taiwan. An unknown virus, designated CX-2, was isolated from a diseased cucumber sample by single lesion isolation on Chenopodium quinoa leaves. Identification of CX-2 was performed. Moreover, the incidence of cucumber viruses in Taiwan was also investigated. Methods Transmission electron microscopy was performed to examine virion morphology. The portable MinION sequencer released by Oxford Nanopore Technologies was used to detect viral sequences in dsRNA of CX-2-infected leaf tissue. The whole genome sequence of CX-2 was completed by Sanger sequencing and analyzed. Reverse transcription-polymerase chain reaction (RT-PCR) with species-specific primers and indirect enzyme-linked immunosorbent assay (ELISA) with anti-coat protein antisera were developed for virus detection in the field [see Additional file 1]. Results Icosahedral particles about 30 nm in diameter were observed in the crude leaf sap of CX-2-infected C. quinoa plants. The complete genome sequence of CX-2 was determined to be 4577 nt long and shared 97.0-97.2% nucleotide identity with those of two cucumber Bulgarian latent virus (CBLV) isolates from Iran and Bulgaria. Therefore, CX-2 was renamed CBLV-TW. In the 2020-2022 field surveys, melon yellow spot virus (MYSV) had the highest detection rate of 74.7%, followed by cucurbit chlorotic yellows virus (CCYV) (32.0%), papaya ringspot virus watermelon type (PRSV-W) (10.7%), squash leaf curl Philippines virus (SLCuPV) (9.3%), CBLV (8.0%) and watermelon silver mottle virus (WSMoV) (4.0%). Co-infection of CBLV and MYSV could be detected in field cucumbers. Conclusion The emerging CBLV-TW was identified by nanopore sequencing. Whole genome sequence analysis revealed that CBLV-TW is closely related to, but phylogenetically distinct from, two known CBLV isolates from Bulgaria and Iran. Detection methods including RT-PCR and indirect ELISA have been developed to detect CBLV and to investigate cucumber viruses in central Taiwan. The 2020-2022 field survey results showed that MYSV and CCYV were the main threats to cucumbers, while CBLV, SLCuPV and WSMoV occurred occasionally. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-022-00976-x.
Cucurbit species are important vegetable crops cultivated worldwide [1]. According to the statistics of the Food and Agriculture Organization of the United Nations (http://www.fao.org/faostat/en/#data/QC), global production of the major cucurbit crops was nearly 450 megatonnes (Mt) in 2020. Cucurbit crops are also economically important in Taiwan, with a planting area of more than 20,000 ha and an annual output value of about US$ 250 million (Agriculture and Food Agency, Council of Agriculture, Executive Yuan, 2020, https://agrstat.coa.gov.tw/sdweb/public/inquiry/InquireAdvance.aspx). Cultivation of cucurbit crops is constantly challenged by pathogenic microorganisms, especially viruses [2]. Numerous viruses infecting cucurbit crops have been reported around the world, of which squash leaf curl Philippines virus (SLCuPV) of the genus Begomovirus [3], cucurbit chlorotic yellows virus (CCYV) of the genus Crinivirus [4], zucchini yellow mosaic virus (ZYMV) [5] and papaya ringspot virus watermelon type (PRSV-W) [6] of the genus Potyvirus, and melon yellow spot virus (MYSV) [7] and watermelon silver mottle virus (WSMoV) [8] of the genus Orthotospovirus have been the most prevalent in Taiwan over the last decade. These viruses are efficiently spread by tiny insects: CCYV and SLCuPV are transmitted by whiteflies, ZYMV and PRSV-W by aphids, and MYSV and WSMoV by thrips, resulting in significant decreases in cucurbit fruit yield and quality [9].
Diagnosis of viral disease by symptomatology is difficult because the symptoms caused by viral infections are similar to those of nutritional deficiencies. Virus identification is critical for crop disease management. Although various serological and molecular detection methods have been developed to identify virus species, the exploration of unknown viruses without a priori knowledge remains challenging with these methods. Fortunately, this issue can be addressed with high-throughput sequencing (HTS) technologies coupled with metagenomic analysis [10]. Several HTS platforms have been launched, such as Roche 454, Illumina, SOLiD, PacBio and Nanopore. Owing to their high accuracy (> 99.9%), large output and low cost, the Illumina sequencing platforms have become the most commonly used tools in plant virology research, including virus detection and whole genome sequencing. However, powerful computing resources are required to process the enormous output data of hundreds of gigabases, consisting of short 200-bp reads, for de novo assembly and sequence alignment [11]. MinION, released by Oxford Nanopore Technologies (ONT, Oxford, UK), is a portable single-molecule sequencer designed for researchers with limited resources and has become an efficient tool for plant virus diagnosis and identification [12][13][14][15]. MinION, using nanopore technology, can analyze the complete sequence of a single nucleic acid molecule in real time by pulling a single nucleic acid strand, anchored by a molecular motor protein, through a biological nanopore and determining the nucleotide (nt) sequence from the measured voltage changes [16]. Although MinION is convenient and cost-effective, the accuracy of base calling is relatively low, ranging from 65 to 88% [16,17]. Nevertheless, consensus sequences obtained by de novo assembly or by mapping to a reference can be comparable to Illumina sequencing [18].
In June 2020, cucumbers cultivated in a greenhouse in Xizhou Township of Changhua County, Taiwan suffered from severe mosaic disease and deformed fruit symptoms. Symptomatic cucumber samples were collected for virus detection and identification. The virus isolate, designated CX-2, was isolated from one of the collected samples through three successive single-lesion isolations on Chenopodium quinoa leaves. CX-2-inoculated C. quinoa leaf tissue was negative for CCYV, SLCuPV, ZYMV, PRSV-W, MYSV and WSMoV. Icosahedral particles about 30 nm in diameter could be observed in the crude leaf sap. Nanopore sequencing performed with MinION identified CX-2 as cucumber Bulgarian latent virus (CBLV). Subsequently, the whole genome sequence of CX-2 was verified by Sanger sequencing to confirm the identity of the virus. The current incidence of CBLV and other viruses in cucumber crops in Taiwan is also addressed in this study.
Virus source and inoculation
Diseased cucumbers showing severe symptoms of leaf mosaic and fruit malformation (Fig. 1a) were collected in Xizhou Township of Changhua County, Taiwan in June 2020. The virus isolate CX-2 was isolated from the diseased cucumber sample '2106-2' through three successive single lesion transfers on C. quinoa leaves. Manual mechanical inoculation was performed for virus transfer, and crude sap of virus-infected leaf tissue ground in 10 mM potassium phosphate buffer (pH 7.0) containing 10 mM sodium sulfite was used as inoculum. The virus was propagated in C. quinoa and Nicotiana benthamiana plants under greenhouse conditions for future studies.
Transmission electron microscopy (TEM)
Small pieces (5 × 5 mm) of virus-inoculated C. quinoa leaves were ground, and the sap droplets were mixed with 2% glutaraldehyde for fixation. Copper grids were floated on the sample droplets for 1 min, the residual liquid on the copper grids was removed with filter paper, and the grids were then stained with 2% uranyl acetate. A JEM-2000EX transmission electron microscope (JEOL Ltd., Japan) was used for examination.
Viral dsRNA extraction
Double-stranded RNA was extracted from virus-inoculated C. quinoa leaf tissue as previously described [19]. Briefly, 200 mg of fresh leaf tissue ground with liquid nitrogen was immediately suspended in 600 µl of EBA-30% E buffer (50 mM Tris-HCl pH 8.5, 50 mM EDTA, 3% SDS, 1% β-mercaptoethanol, 1% PVPP-40, adjusted to 30% ethanol) by rolling for 20 min at room temperature and then centrifuged at 16,110 g for 15 min at 4 °C. The supernatant was collected and adjusted to a final concentration of 20% ethanol and then loaded to a micro-column, in which 600 µl of cellulose CF-11 (Whatman, Buckinghamshire, UK) had been equilibrated with 1× STE-20% E buffer (10 mM Tris-HCl pH 8, 100 mM NaCl, 1 mM EDTA, pH 8.0, adjusted to pH 7.8 and 20% ethanol). The micro-column was centrifuged at 100 g for 2 min to remove liquid and then washed twice by adding 450 µl of 1× STE-20% E buffer and centrifuging at 100 g for 2 min. The dsRNA was eluted from the column by adding 400 µl of 1× STE buffer twice and centrifuging at 100 g for 2 min. The eluate was collected and mixed with an equal volume of isopropanol, rolled for 10 min at room temperature and centrifuged at 16,110 g for 30 min at 4 °C. The dsRNA pellet was washed with 70% ethanol, air-dried at room temperature, and dissolved in 50 µl of RNase-free water.
cDNA library construction and ONT nanopore sequencing
First-strand cDNA was synthesized with SuperScript IV reverse transcriptase (ThermoFisher Scientific, Waltham, MA) and random hexamers, starting from 200 ng of dsRNA. The second-strand DNA was synthesized with the Klenow fragment of DNA polymerase I (New England Biolabs, Ipswich, MA). The synthesized dsDNA was precipitated with 100% ethanol and then end-repaired and A-tailed by adding the EA enzyme provided in the KAPA Hyper Prep kit (KAPA Biosystems, Wilmington, MA). Ligation of the treated dsDNA with the adaptor motor mix was performed with the ONT ligation sequencing kit following the ONT protocol SQK-LSK109. The prepared dsDNA was loaded onto flow cells and sequencing was performed with MinION for 8 h. Reads obtained from sequencing were analyzed in real time using the ONT EPI2ME WIMP workflow.
Metagenomic analysis for taxonomic classification
The taxonomic classification of the sequence data was performed with Kraken 2 [20]. The non-redundant nt database was downloaded from the National Center for Biotechnology Information (NCBI) GenBank and used to build a classification database for Kraken 2 (k = 35, ℓ = 31). The dustmasker and segmasker programs [21], provided as part of NCBI's BLAST suite, were used to mask low-complexity regions. Bracken was used to estimate abundance at the standard taxonomy levels [22]. The output results were confirmed by BLASTn searches of NCBI using customized Python scripts.
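A sketch of this classification pipeline, written as Python subprocess calls to the Kraken 2 and Bracken command-line tools, is shown below. The database and file names are hypothetical, and the exact commands used in this study are not reported; the flags follow the tools' documented interfaces.

```python
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

db, reads = "nt_kraken_db", "cx2_dsrna_cdna.fastq"   # hypothetical paths

# Build a Kraken 2 database from masked nt sequences (k = 35, l = 31)
run(["kraken2-build", "--download-taxonomy", "--db", db])
run(["kraken2-build", "--add-to-library", "nt_masked.fna", "--db", db])
run(["kraken2-build", "--build", "--db", db,
     "--kmer-len", "35", "--minimizer-len", "31"])

# Classify nanopore reads, then re-estimate abundances with Bracken
run(["kraken2", "--db", db, "--threads", "8",
     "--report", "cx2.kreport", "--output", "cx2.kraken", reads])
run(["bracken", "-d", db, "-i", "cx2.kreport", "-o", "cx2.bracken", "-l", "S"])
```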
Viral genome sequencing by Sanger sequencing
Total RNA was extracted from virus-infected C. quinoa leaf tissue using the Plant Total RNA Miniprep Purification kit (GeneMark, GMbiolab, Taichung, Taiwan) according to the manufacturer's instructions. The nt sequences of the primers used to amplify viral genome fragments by reverse transcription-polymerase chain reaction (RT-PCR) are shown in Additional file 2: Table S1.
5´ and 3´ rapid amplification of cDNA ends (RACE)
The 5´- and 3´-ends of the viral genome were confirmed by RACE [23]. Specific primers were designed from the determined nt sequences, as shown in Additional file 2: Table S1. Total RNA used as template was denatured at 70 °C for 10 min and then put on ice for 1 min. First-strand cDNA was synthesized by SuperScript IV reverse transcriptase (Invitrogen) mixed with 200 nM of each primer at 50 °C for 60 min, followed by a stop reaction at 70 °C for 15 min. After removal of the template RNA by RNase H (Invitrogen), the cDNA products were precipitated by adding 1/10 volume of 3 M sodium acetate (pH 5.2) and 2.5 volumes of absolute ethanol at −20 °C overnight. After centrifugation at 17,000 g for 15 min, the pellet was resuspended in 20 µl DEPC-treated water. Subsequently, 200 nM PolyG oligonucleotide [24] was tailed onto the 3´ end of the cDNA fragments by 20 U terminal deoxynucleotidyl transferase (TdT) (New England Biolabs) at 37 °C for 30 min, and the reaction was terminated at 70 °C for 10 min. The tailed cDNA fragments were mixed with 2.5 U Blend Taq-Plus (TOYOBO, Osaka, Japan), 200 nM PolyC [24] complementary to the PolyG tail, and 200 nM of another appropriate primer, as shown in Additional file 2: Table S1.
Viral genome sequence analysis
Genome sequences of different species of the genus Tombusvirus, including distinct CBLV isolates, were obtained from the GenBank database (http://www.ncbi.nlm.nih.gov/), as shown in Additional file 3: Table S2. Sequence identity analysis was performed with AlignX in the Vector NTI Suite 10 (Invitrogen). Multiple sequence alignments were performed using the ClustalX 2.1 program in MEGA X [25]. Phylogenetic analyses were performed by the Neighbor-Joining method with 1000 bootstrap replicates using the Tree Explorer program in MEGA X.
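For readers without MEGA X, an equivalent identity-distance Neighbor-Joining analysis with bootstrap support can be sketched with Biopython, as below; the alignment file name is hypothetical, and this is an alternative to, not a reproduction of, the MEGA X workflow used here.

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Hypothetical alignment of tombusvirus genome sequences (e.g., exported
# from ClustalX in FASTA format)
alignment = AlignIO.read("tombusvirus_aln.fasta", "fasta")

calculator = DistanceCalculator("identity")               # pairwise identity distances
constructor = DistanceTreeConstructor(calculator, "nj")   # Neighbor-Joining

# Majority-rule consensus tree from 1000 bootstrap replicates
tree = bootstrap_consensus(alignment, 1000, constructor, majority_consensus)
print(tree)
```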
Purification of viral coat protein (CP)
CX-2 CP was purified using the ultra-speed centrifugation method described previously [26], with modifications. Briefly, 100 g of CX-2-infected C. quinoa leaves harvested 3 days post-inoculation (dpi) were homogenized in 300 ml of TB buffer (10 mM Tris-HCl, pH 8.0, containing 10 mM sodium sulfite and 0.1% cysteine) in a blender and centrifuged at 10,000 rpm for 15 min (GRF-L-m2.0-30, Gyrozen, Korea). The supernatants were collected and treated with 1% Triton X-100 at 4 °C for 30 min, followed by centrifugation at 25,000 rpm in a Beckman Type 45 Ti rotor for 2.5 h through a 20% sucrose cushion. The pellets were then resuspended in TBG buffer (TB buffer containing 10 mM glycine) for isopycnic centrifugation through 32% cesium sulfate at 35,000 rpm in a Beckman SW 41 rotor for 17 h. The opalescent zones were collected and precipitated by centrifugation at 45,000 rpm in a Beckman Type 70 Ti rotor for 1 h. The pellets were resuspended in TBG buffer and treated with protein sample buffer (50 mM Tris-HCl, pH 6.8, 2% sodium dodecyl sulfate (SDS), 12% glycerol, 0.01% bromophenol blue and 2% β-mercaptoethanol) at 100 °C for 3 min. Proteins were separated by 12% SDS-polyacrylamide gel electrophoresis (PAGE) and visualized by soaking the gels in cold 0.3 M KCl. The desired protein band was excised and eluted from the gel using a Model 422 Electro-Eluter (Bio-Rad, Hercules, CA). The yield of the purified CP was estimated with the Spot Density software of the AlphaInnotech IS2000 (AlphaInnotech Corporation, San Leandro, CA) by comparison with quantified bovine serum albumin (BSA), as previously described [27].
Production of rabbit antiserum
One hundred micrograms of the purified CP dissolved in 1 ml of PBS buffer (136 mM NaCl, 1 mM KH₂PO₄, 8 mM Na₂HPO₄·12H₂O, 2 mM KCl and 3 mM NaN₃) was emulsified with an equal volume of Freund's complete adjuvant (BioSmart, South Korea) and injected subcutaneously into a New Zealand white rabbit. One week later, the rabbit was injected weekly with 100 µg of the same immunogen in 1 ml of PBS emulsified with an equal volume of Freund's incomplete adjuvant (BioSmart) for two weeks. Blood was collected weekly from the marginal ear veins of the rabbit for one month, starting from 1 week after the third injection. The collected blood was incubated at 37 °C for 1 h, and the antiserum was collected from the supernatant after centrifugation at 8100 g for 10 min.
Enzyme-linked immunosorbent assay (ELISA)
Indirect ELISA was conducted as previously described, with modifications [28], for antiserum titration and virus detection. Species-specific primers, including one to SLCuPV (5′-CCG AAT CAT AAA ATA GAT CCGG-3′) designed in this study, were used for nucleic acid amplification. The primer pair nad5-s (5′-GAT GCT TCT TGG GGC TTC TTGTT-3′)/nad5-as (5′-CTC CAG TCA CCA ACA TTG GCA TAA-3′), which amplifies the NADH dehydrogenase subunit 5 (nad5) gene, was used as a plant internal control [30]. The One-Step RT-PCR kit (GeneMark) was used in the RT-PCR analysis as described by the manufacturer. The amplification conditions were 50 °C for 30 min, followed by 94 °C for 2 min, and then 35 cycles of 30 s at 94 °C, 30 s at 58 °C, and 1 min at 72 °C, with a final extension at 72 °C for 7 min. Indirect ELISA was performed as mentioned above to detect MYSV, WSMoV, PRSV-W and ZYMV using the individual antisera described previously [9].
CX-2 particle morphology
CX-2, isolated from a diseased greenhouse-grown cucumber, induced visible necrotic spots on inoculated C. quinoa leaves at 3 days post-inoculation (Fig. 1b). Furthermore, a large number of negatively stained icosahedral virus particles with a diameter of about 30 nm were observed in the crude sap of CX-2-infected C. quinoa leaves at 3 dpi (Fig. 1c); therefore, this leaf material was used for subsequent studies.
Identification of CX-2 as CBLV
The dsRNA extracted from CX-2-infected C. quinoa leaves was used as the template to construct a random-primed cDNA library for sequencing. The ONT MinION was used to read the nt sequences. A total of 7408 reads were analyzed, of which 13 reads mapped to CBLV of the genus Tombusvirus in the family Tombusviridae. The full-length genome sequence of the original CBLV isolate (acc. no. AY163842) was used as a reference to align the classified reads by BLASTn, showing that the reads share nt identities of 83.0-95.7% with the reference sequence, with a genome coverage of 73.3% (see Additional file 4).
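Genome coverage of this kind can be computed by merging the reference intervals spanned by the BLASTn hits. The following Python sketch illustrates the interval-merging arithmetic; the hit coordinates and the reference length used in the example are hypothetical placeholders, not the actual alignment data.

```python
def genome_coverage(hit_intervals, genome_length):
    """Percent of the reference genome covered by at least one read.

    hit_intervals: list of (start, end) alignment coordinates on the
    reference (1-based, inclusive), e.g., parsed from BLASTn output.
    """
    merged = []
    for start, end in sorted(hit_intervals):
        if merged and start <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], end)   # extend previous interval
        else:
            merged.append([start, end])               # start a new interval
    covered = sum(end - start + 1 for start, end in merged)
    return 100.0 * covered / genome_length

# Hypothetical alignments of reads against a ~4.6-kb CBLV reference
hits = [(1, 900), (600, 2100), (2500, 3350), (3300, 4100)]
print(f"{genome_coverage(hits, 4576):.1f}% coverage")
```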
Furthermore, Sanger sequencing using the newly designed primers was performed to elucidate the full-length genome sequence of CX-2. Five overlapping fragments, including the 5´ and 3´ RACE products, amplified from the total RNA of CX-2-infected C. quinoa leaves by RT-PCR, were cloned for sequencing. The complete genome sequence of CX-2, containing the open reading frames (ORFs) of the RNA-dependent RNA polymerase (RdRp, nt 135-2507), CP (nt 2526-3668), movement protein (MP, nt 3709-4260) and 19-kDa protein (P19, nt 3741-4247), was determined to be 4577 nt in length and deposited in GenBank (acc. no. MW359100) (Fig. 2). The whole genome sequence of CX-2 shared 97.0% and 97.2% nt identity with those of the two CBLV isolates from Iran and Bulgaria, respectively. The genomic ORFs of CX-2 shared 95.8-98.9% and 96.8-99.4% nt and aa identities, respectively, with those of the CBLV Bulgarian and Iran (W12-101) isolates, but shared lower nt and aa identities of 50.5-82.3% and 33.7-83.1%, respectively, with those of other tombusviruses (Table 1). Phylogenetic analyses of RdRp, CP, MP and P19 indicated that CX-2 is closely related to the two known CBLV isolates (Fig. 3). Taken together, CX-2 was identified as an isolate of CBLV and renamed CBLV-TW.
Fig. 4 Purification of CBLV-TW coat protein (CP) from infected Chenopodium quinoa leaves by the ultra-speed centrifugation method. a An obvious opalescent band, indicated by a white arrow, was observed after isopycnic centrifugation through 32% cesium sulfate. b Individual centrifugation fractions of the purification procedure (10 K rpm, 25 K rpm and 45 K rpm) were analyzed by 12% SDS-PAGE. S and P represent supernatant and pellet, respectively. c Immunoblotting using the produced antiserum RAs-CBLV was conducted to detect the purified CBLV CP (lane CP). The crude extract of CBLV-TW-infected C. quinoa leaf was used as positive control (lane CBLV). Protein marker (lane M) was loaded as molecular weight standard. CBLV-TW CP is indicated by black arrows.
Purification of CBLV-TW CP
Leaf tissues of C. quinoa inoculated with CBLV-TW were harvested at 3 dpi for CP purification. An obvious opalescent band was observed near the center of the centrifuge tube after isopycnic centrifugation through 32% cesium sulfate. The expected 41-kDa CP was obtained from the material within the opalescent band (Fig. 4).
Approximately 800 µg of CBLV-TW CP could be purified from 100 g of C. quinoa leaf tissue.
Responses of the produced anti-CBLV CP antiserum
The rabbit antiserum produced with the purified CBLV-TW CP as immunogen was denoted RAs-CBLV. A serial dilution of RAs-CBLV was reacted with the crude leaf saps of CBLV-infected C. quinoa plants at a 1/50 dilution in indirect ELISA, and the endpoint dilution was determined as 1/640,000 (average reading of CBLV = 0.1547 compared with the healthy negative control (H) = 0.079) (Fig. 5a). RAs-CBLV is recommended for use at a 1/5000 dilution. Reciprocally, the purified CP was serially diluted and reacted with RAs-CBLV, showing a detection limit of 100 pg in indirect ELISA (Fig. 5b). RAs-CBLV was negative for other tested virus species, including broad bean
Detection of virus infection in field cucumber crops
RAs-CBLV was used to detect CBLV-TW in the field cucumber sample '2106-2', the original host of CBLV-TW, by indirect ELISA (Fig. 6a). In addition, the CBLV-specific primer pair CBLV3900F/CBLV4576R was used to amplify the expected 678-bp DNA fragment from the total RNA of '2106-2' (Fig. 6b). The amplicons were sequenced to confirm their correctness.
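A common convention in plant virology ELISA, used here only as an illustration and not stated by the authors, is to call a sample positive when its absorbance exceeds about twice the mean reading of the healthy controls. A minimal Python sketch with hypothetical plate readings:

```python
def elisa_positive(od_sample, od_healthy_mean, ratio=2.0):
    """Call a sample positive when its absorbance exceeds `ratio` times
    the mean reading of healthy (negative-control) plant sap."""
    return od_sample >= ratio * od_healthy_mean

# Hypothetical readings; 0.079 echoes the healthy-control value reported
# for the titration experiment above.
healthy_mean = 0.079
for sample, od in [("2106-1", 0.085), ("2106-2", 0.912), ("2106-3", 0.141)]:
    print(sample, "positive" if elisa_positive(od, healthy_mean) else "negative")
```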
Discussion
CBLV was first reported in Bulgaria in 2003 and identified as a new member of the genus Tombusvirus in the family Tombusviridae according to its virion morphology, genome sequence homology and serological relatedness [31]. Ten years later, CBLV was found again in Iran [32]. This is the third report of CBLV, emerging in Taiwan in 2020. All CBLV isolates were from cucumber. Similar to the co-infection of CBLV with watermelon mosaic virus (WMV) reported in Bulgaria [31], co-infection of CBLV and MYSV has occurred in Taiwan. Nevertheless, different from the previous reports [31,32], in our study the pure CBLV cultures, CX-2 and CX-5, were obtained from single lesions on C. quinoa leaves. We found that C. quinoa is an ideal host for providing CBLV material for virion examination and for purification of dsRNA, total RNA and CP, owing to the consistent viral accumulation 3 days after inoculation. The possibility of contamination by other pathogens, such as MYSV, can also be ruled out.
Fig. 6 (caption): The primers CBLV3900F and CBLV4576R were used in RT-PCR amplification. Cucumber samples '2106-1' to '2106-5', collected from Xizhou, Changhua in June 2020, were used for CBLV detection. Sample '2106-2' is the source of CBLV-TW. CBLV-TW-infected and healthy Chenopodium quinoa leaf tissues were used as positive and negative controls, respectively. The expected size of amplicons in the RT-PCR assay is indicated by an arrow.
On the other hand, nanopore sequencing with MinION was performed to identify CBLV. The ONT nanopore sequencing and real-time analysis technology significantly facilitated virus identification within 48 h, but the accuracy of base calling remained relatively low, with 83.0-95.7% identity to the reference sequence. Also, too many gaps made the reads impossible to assemble. Therefore, Sanger sequencing was necessary to determine the viral whole genome sequence. After validation of the CBLV-TW genome sequence, whole genome sequence analysis of the three CBLV isolates revealed that the current CBLV isolates share the same genome structure and 97-98% sequence homology. However, the genome of the Taiwan isolate CBLV-TW is one base longer, in the 3′ untranslated region, than those of the other two. CBLV-TW is also phylogenetically distinct from the Bulgarian and Iran isolates (Fig. 3), suggesting that CBLV has diversified geographically.
Furthermore, detection methods for CBLV have been developed and applied in field surveys. In addition to specific primers for RT-PCR analysis, we also produced rabbit antisera against CBLV-TW CP for virus detection. A modified organic solvent-free centrifugation method [26] was used to efficiently purify the immunogen. The produced RAs-CBLV is highly sensitive and species-specific, and its utility in indirect ELISA for CBLV detection is comparable to that of RT-PCR analysis (Fig. 6).
The current incidence of cucumber viruses in Taiwan was investigated. Symptomatic cucumber samples were collected in 2020-2022 from Yunlin, Changhua and Taichung, the important cucumber cultivation areas in central Taiwan, for assay. In addition to CBLV, a multiplex RT-PCR previously developed in our laboratory was conducted to simultaneously detect four important cucurbit-infecting viruses in Taiwan, namely the whitefly-transmitted CCYV and SLCuPV and the thrips-transmitted MYSV and WSMoV. Indirect ELISA with individual antisera specific to CBLV, MYSV, WSMoV, PRSV-W and ZYMV was also performed. Our results showed that MYSV and CCYV were the main cucumber viruses in Taiwan, with detection rates of 74.7% and 32.0%, respectively. PRSV-W (10.7%), SLCuPV (9.3%) and CBLV (8.0%) occurred occasionally. ZYMV was not detected in any of the cucumber samples tested. Mixed infections of MYSV with the other viruses, including CBLV, were detected, but not frequently.
Cucumber crops are grown in greenhouses in Taiwan to isolate them from pests; however, it is difficult to prevent the entry of tiny insects, such as thrips and whiteflies [33]. The melon thrips (Thrips palmi Karny), the vector of MYSV and WSMoV [9], and the silverleaf whitefly (Bemisia argentifolii Bellows & Perring), the vector of CCYV and SLCuPV [4], are common insect pests in greenhouses and have serious effects on many vegetable crops in Taiwan, especially Cucurbitaceae and Solanaceae crops. Although these insect-borne viruses can infect a variety of cucurbit crops, they exhibit different host preferences. The genetic diversity of crop varieties is also associated with virus susceptibility. For instance, both MYSV and WSMoV can infect melons and watermelons; however, MYSV prefers melons, whereas WSMoV prefers watermelons [9]. Our results show that cucumbers are very susceptible to MYSV and CCYV, but resistant to ZYMV. This may be related to the genetic background of the cucumber varieties favored in Taiwan.
No single infection with CBLV alone was detected. Plants infected with CBLV alone may be overlooked due to latent infection [31]. Unusually severe symptoms were observed on the cucumber plants co-infected with MYSV and CBLV (Fig. 1a), suggesting a synergy between MYSV and CBLV. The synergistic effect of MYSV and CBLV on cucumber plants can be investigated in the future.
Conclusion
The emerging CBLV-TW was identified by single lesion isolation and nanopore sequencing. Whole genome sequence analysis revealed that CBLV-TW is closely related to, but phylogenetically distinct from, two known CBLV isolates from Bulgaria and Iran. Detection methods including RT-PCR and indirect ELISA have been developed to detect CBLV and to investigate cucumber viruses in central Taiwan. The 2020-2022 field survey results showed that MYSV and CCYV were the main threats to cucumbers, while CBLV, SLCuPV and WSMoV occurred occasionally.
"Environmental Science",
"Biology"
] |
Tuning the Electronic Property of Reconstructed Atomic Ni‐CuO Cluster Supported on N/O‐C for Electrocatalytic Oxygen Evolution
Abstract Electrochemical activation is usually accompanied by in situ atom rearrangement, forming new catalytic sites with higher activity due to reconstructed atomic clusters or amorphous phases with abundant dangling bonds, vacancies, and defects. By harnessing the pre-catalytic process of reconstruction, a multilevel structure of CuNi alloy nanoparticles encapsulated in N-doped carbon (CuNi nanoalloy@N/C) transforms into a highly active compound of Ni-doped CuO nanoclusters supported on N/O co-doped carbon (N/O-C). Both the exposure of accessible active sites and the activity of individual active sites are greatly improved after the pre-catalytic reconstruction. Manipulating the Cu/Ni ratios of CuNi nanoalloy@N/C can tailor the electronic property and d-band center of the highly active compound, which greatly optimizes the energetics of the oxygen evolution reaction (OER) intermediates. This interplay among Cu, Ni, C, N, and O modifies the interface, activates the active sites, and regulates the work functions, thereby realizing a synergistic boost in OER.
Introduction
The oxygen evolution reaction (OER) is a critical process in various advanced energy conversion devices such as water electrolysis and rechargeable metal-air batteries; it involves a four-electron transfer and several adsorbed intermediates (OH*, O*, and OOH*) [1,2]. However, the kinetics of O─O bond formation and the tri-phase reaction at the interface of gas diffusion and ion adsorption lead to sluggish rates, which require electrocatalysts to provide abundant active sites to facilitate OER [3,4]. Compared to benchmark noble-metal-based electrocatalysts, transition metal compounds, with the advantages of earth-abundance, non-toxicity, and tunable electronic properties, serve as promising candidates for long-lived and high-efficiency OER catalysts [5,6]. Nevertheless, most transition metal compounds require relatively high overpotentials to drive OER efficiently, resulting in increased energy consumption [7,8]. Therefore, designing and fine-tuning the electronic properties of OER catalysts to decrease overpotentials is necessary [9,10]. Copper-based materials are among the most widely used materials for electrochemistry and catalysis [11]. However, when applied as OER electrocatalysts, copper-based materials typically show less satisfactory catalytic performance compared to the well-studied nickel/cobalt-based composites [12]. This is because the filling of the anti-bonding state of Cu (3d¹⁰4s¹) is higher than those of Ni (3d⁸4s²) and Co (3d⁷4s²) when the O 2p band of the oxygen intermediates hybridizes with the metal d band [13]. Isolating active sites enables regulation of the local coordination environment so as to achieve activity and selectivity [14]. Atomic clusters offer diverse geometric and electronic structures, plentiful active sites, and more moderate interaction intensities, while overcoming the unsatisfactory stability of single atoms [15]. Moreover, under the electrochemical conditions of OER in alkaline solution, most transition-metal-based catalysts tend to undergo interfacial evolution and activated reconstruction [16]. The most essential driving force for reconstruction is surface chemical conversion. It has been found that the adaptive surface evolution and degree of reconstruction are initially determined by the physicochemical features of the pre-catalysts, and then strongly impacted by the reaction and service conditions as well as their interactions during OER, such as the local pH and its gradient distribution, the applied potential, the types and concentrations of exotic ions, and external fields on top of the catalysts [17]. The reconstruction can be either reversible or irreversible over time, depending on the intrinsic properties of the pre-catalysts and the external conditions [18]. The process, accompanied by atomic rearrangements, forms an active amorphous phase or atomic nanoclusters/single atoms with abundant dangling bonds, vacancies, and defects, which enhances the exposure of accessible active sites and the activity of individual active sites [19]. Herein, a series of CuNi alloy nanoparticles encapsulated in N-doped carbon (denoted CuNi nanoalloy@N/C) were obtained by carbonizing Cu/Ni bimetallic complexes. When employed as OER pre-catalysts, CuNi nanoalloy@N/C undergoes an in situ reconstruction process, transforming into a highly active compound of Ni-CuO clusters supported on N/O co-doped C.
Controlling the Cu/Ni ratio within CuNi nanoalloy@N/C can tune the electronic properties and d-band center of the highly active compound, greatly optimizing the adsorption/desorption strength of the intermediates. The interplay among Cu, Ni, C, N, and O modifies the interface, activates the catalytic sites, and regulates the work functions, thus synergistically boosting OER activity.
Results and Discussion
CuNi nanoalloy@N/C was prepared via a two-step synthetic route, as depicted in Figure 1a: Ni(II) initially coordinates with alanine, followed by the addition of a high-concentration Cu(II) solution for further coordination and substitution reactions. This sequence was chosen because the Cu(II) ion is more prone to forming stable coordination compounds than the Ni(II) ion, owing to its smaller ionic radius and higher ionization potential (more details are provided in the Experimental Section). Cu/Ni bimetallic complexes (P1-P7) with varying Cu/Ni ratios were thus formed; Table S1 (Supporting Information) lists the amounts of reagents used in the synthesis. Figure S1 (Supporting Information) shows optical photographs of the Cu/Ni bimetallic complexes, in which the color of P1-P7 can be seen to change slightly with the Cu/Ni ratio. Field-emission scanning electron microscopy (FE-SEM) images in Figure S2 (Supporting Information) reveal that the complexes exhibit a 2D plate morphology on the micrometer scale. Additionally, the samples show a gradual morphological transformation, with the length-to-width ratio increasing as the Cu/Ni molar ratio decreases.
Thermogravimetric (TG) analysis was conducted on P1-P7 under an Ar atmosphere. The significant weight loss (≈70%) at 200-300 °C is related to the decomposition of P1-P7, as shown in Figure S3 (Supporting Information). The plateaus in the TG curves indicate the fracture of different coordination bonds, demonstrating the effect of the Cu/Ni ratio on the stability of the complexes. The increase in temperature leads to the collapse of the coordination structures of P1-P7, releasing part of the C, N, H, and O as gaseous species. This process results in the formation of an atomically dispersed porous structure of N-doped amorphous-carbon-supported CuNi alloy nanoparticles.
The X-ray diffraction (XRD) patterns of P1-P7 in Figure S4a (Supporting Information) show typical peaks at ≈10°, possibly arising from the 2D plate crystallites with a specific crystallographic plane orientation. Additionally, the XRD patterns in Figure S4b (Supporting Information) reveal characteristic peaks at 43.5°, 50.6°, and 74.1°, which can be well indexed to the (111), (200), and (220) lattice planes of the CuNi alloy (JCPDS#70-3039). The slight shift of these characteristic peaks with increasing Ni proportion indicates the incorporation of Ni atoms into the lattice, substituting for Cu atoms. [20] The P1-P7 precursors retained their 2D morphology with micrometer size after calcination, as shown in Figure S5 (Supporting Information). The sample size decreased as the Ni proportion increased, and the morphologies evolved from microplates to curly nanobelts. The FT-IR spectra of P1-P7 and M1-M7 are shown in Figure S6 (Supporting Information), in which the stretching modes of carboxylate and the C─C and C─H bending vibrations appear within the spectral region of 1650-1250 cm⁻¹. The vibrational bands at 3455, 3261, 1621, and 1283 cm⁻¹, corresponding to (O─H), (N─H), free water, and (C─H) respectively, are characteristic of crystal-water-containing complexes. [21] The FT-IR spectra of M1-M7 (Figure S6b, Supporting Information) display peaks corresponding to O─H, C═O, C─O, and C─O─C groups. These peaks may be attributed to a small number of functional groups or residual dangling bonds remaining after calcination, resulting from the breakage of the intact coordination network. [22] During electrochemical reactions, the active sites of carbon-based materials mainly reside at lattice defects or edges generated by the breaking or redistribution of charge/spin in the sp²-conjugated carbon matrix.
The as-obtained pre-catalysts undergo an in situ reconstruction process during electrochemical activation by cyclic voltammetry (CV); the morphological and structural changes of M2 are shown in Figure 1. Figure 1b shows the XRD patterns of fresh M2 and M2 after CV. After 50 CV cycles, the XRD pattern of M2-CV50 shows the combined characteristic peaks of the CuNi alloy (JCPDS#70-3039) and CuO (JCPDS#80-0076). These characteristic peaks vanish after 100 CV cycles, leaving only an ambiguous peak at ≈44.3°, speculated to arise from the formation of a CuNi hydroxide. The FE-SEM and transmission electron microscopy (TEM) images of M1-M7 are presented in Figure S6 (Supporting Information) and Figure 1c,d. These images show that the CuNi alloy NPs have a size of approximately 30 nm and are surrounded by amorphous carbon. Figure 1d reveals lattice fringes with spacings of 0.21 and 0.19 nm, which can be assigned to the (111) and (200) crystal faces of the CuNi alloy, respectively. The selected-area electron diffraction (SAED) pattern inset in Figure 1c indicates the polycrystalline nature of the sample, as evidenced by the diffraction rings. Furthermore, Figure 1d highlights the functionalized carbon produced during pyrolysis, marked with a yellow circle. The EDS mapping images (Figure 1f) and EDS line scans in Figures S7-S11 (Supporting Information) show that the atomically homogeneous CuNi alloy nanoparticles are evenly distributed throughout the hierarchically structured samples and are mainly coated with N-doped carbon.
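A quick consistency check on these assignments is to relate the XRD peak positions to d-spacings via Bragg's law and compare them with the HRTEM fringe spacings. The sketch below assumes a Cu Kα source (λ = 1.5406 Å), which the text does not state explicitly.

```python
import math

# Minimal sketch: convert the reported 2-theta peak positions to first-order
# Bragg d-spacings, d = lambda / (2 sin(theta)), and compare with the HRTEM
# fringes (0.21 and 0.19 nm). Cu K-alpha radiation is an assumption here.

WAVELENGTH = 1.5406  # Angstrom, assumed Cu K-alpha source

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    """First-order (n = 1) Bragg d-spacing in Angstrom."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

for two_theta, plane in [(43.5, "(111)"), (50.6, "(200)"), (74.1, "(220)")]:
    d = d_spacing(two_theta)
    print(f"2theta = {two_theta:5.1f} deg -> d{plane} = {d:.3f} A = {d/10:.3f} nm")

# d(111) ~ 2.08 A (0.21 nm) matches the reported fringe exactly, and
# d(200) ~ 1.80 A (~0.18 nm) is close to the 0.19 nm fringe assigned to (200).
```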
Figure S12 (Supporting Information) shows TEM images of M2 after 50 and 100 CV cycles, respectively. It can be inferred that, as the CV test proceeded, the CuNi alloy nanoparticles gradually transformed into 2D sheets of Cu/Ni hydroxide with slight agglomeration. A strong metal-support interaction (SMSI), rigorously defined as the interaction between a metal and its support, can affect metal dispersion and alter the electronic and geometric structures of the loaded metals. The dispersed metals typically adopt a highly dispersed form, either as layer-like structures or as few-atom clusters and single atoms, because of the electronic or reactive metal-support interaction. [18] This means that the Cu and Ni sites tend to be evenly distributed, forming oxide clusters, with only a small number of single atoms distributed on the carbon, as shown in Figure 1g. The elemental content of M2 in Figure S13 (Supporting Information) is Cu 53.67%, Ni 12.49%, C 25.07%, N 7.01%, and O 1.76%. It can be inferred that, after the pre-catalytic transformation from CuNi nanoalloy@N/C to Ni-doped CuO nanoclusters on N/C, the O content increased significantly while the C and N contents partially decreased. Single atoms supported on N-doped C are theoretically thermodynamically unstable above 0.207 V, whereas the OER generally occurs at potentials above 1.4 V (vs RHE); the carbon may therefore be insufficiently stable to support single atoms during the OER. [23] Accordingly, the single-atom content in the reconstructed M2 is very small, and these atoms cannot continuously catalyze the OER owing to the instability of the carbon framework. Therefore, the analysis should focus mainly on the oxide clusters, which play the major role in the reaction. [24] Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy (AC-HAADF-STEM) with sub-angstrom resolution was used to confirm the presence of Ni/Cu oxide clusters (marked in yellow) immobilized on the CNO frameworks, as shown in Figure 1g. The EDS mapping images of M2-CV in Figure 1h demonstrate the uniform distribution of all elements at the micron scale.
X-ray photoelectron spectroscopy (XPS) was conducted to probe the changes in the electronic properties of M1-M7, as shown in Figure S14 (Supporting Information). For the Cu 2p1/2 (≈951 eV) and Cu 2p3/2 (≈932 eV) peaks in Figure S14a (Supporting Information), the slight peak shifts indicate the different oxidation states of M1-M7. Similarly, in Figure S14b (Supporting Information), the shifts of the Ni 2p1/2 (≈868 eV) and Ni 2p3/2 (≈852 eV) peaks reveal the varying oxidation states of Ni²⁺ and Ni⁰. Notably, as the proportion of Ni increases from M1 to M7, the Cu 2p binding energy slightly decreases while the Ni 2p binding energy increases. This trend can be explained by the redistribution and disorder of electrons in the coordination environment of the atoms, which leads to an opposite shift of the Cu 3d band toward the Fermi level and subsequently tunes the adsorption energy of OH*. [23] Furthermore, the M-O peaks of M1 and M7 in the O 1s spectrum (Figure S14c, Supporting Information) reveal that the monometallic composites (M1 with Cu and M7 with Ni) exhibit more pronounced oxidation than the bimetallic composites (M2-M5). This difference may be attributed to a "metallic protection" effect of the CuNi alloy, which suppresses oxidation of the metallic components. Moreover, the N 1s and C 1s spectra of M1-M7, shown in Figure S14d,e (Supporting Information), provide further insights. The N 1s spectrum displays two peaks corresponding to graphitic N (401.8 eV) and pyridinic N (398.2 eV). The C 1s spectrum consists of peaks representing the C─C sp² bond at 284.5 eV, the C─C sp³ bond at 285.3 eV, and the C─N bond at 286.2 eV. Finally, the shifts in peak centers observed in the N 1s and C 1s spectra of M1-M7 reflect changes in the content of N and C species, possibly resulting from the different electronic affinities of Cu and Ni in CuNi alloys with different Cu/Ni ratios. The porous character of the samples is evident from the N₂ adsorption-desorption isotherms (Figure S15, Supporting Information). After calcination, the Brunauer-Emmett-Teller (BET) surface area of CuNi alloy@N/C increased nearly tenfold compared with the Cu/Ni bimetallic complexes. Notably, M2 has the highest BET surface area (198.24 m² g⁻¹) among M1-M7. The porous structure provides abundant exposed active sites and promotes electron transfer and mass transport during electrochemical reactions.
The electrochemical activation of the pre-catalysts was performed by CV (Figure S16, Supporting Information; Figure 2d); all samples undergo the reconstruction process and show lower overpotentials after in situ reconstruction. In Figure 2d, after CV activation, the redox peak of M2 decreases and shifts, which is caused by the oxidation and reconstruction of the CuNi alloy NPs. The linear sweep voltammetry (LSV) curves after the pre-catalytic procedure are shown in Figure 2a; M2 exhibits an earlier current response than the other samples, requiring a potential of only 1.41 V (overpotential of 180 mV) to achieve a current density of 10 mA cm⁻², and a potential of 1.59 V (overpotential of 360 mV) to reach 100 mA cm⁻². The anodic peaks observed in the polarization curves (inset of Figure 2a) are attributed to oxidation of the metal sites on the surface. Tafel slopes of M1-M7 after CV activation were derived from the LSV data, as shown in Figure 2b. In Figure 2c, the overpotentials and Tafel slopes of M1-M7 after CV activation are compared; M2 shows the lowest overpotential and Tafel slope among the catalysts, exhibiting first-class electrocatalytic performance. The electrochemically active surface area (ECSA) and surface roughness were estimated from the electrochemical double-layer capacitance (C_dl) in the non-faradaic region, as shown in Figure S17 (Supporting Information). After electrochemical activation, M2-CV exhibited the highest C_dl value of 22.51 mF cm⁻², indicating significant roughness and a substantial number of active sites. The changes in intrinsic kinetics were analyzed through the three-dimensional AC impedance curves of M1-M7 with regulated Cu/Ni ratios (Figure S18, Supporting Information). A lower charge-transfer resistance (R_ct) is favorable, as it indicates a lower interfacial reaction resistance and better OER performance. The structure, conductivity, wettability, and electrolyte accessibility of the catalyst greatly influence the reaction rate at the working electrodes/substrates.
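For readers less familiar with these metrics, the sketch below shows how the quoted numbers are typically extracted from an LSV curve: the overpotential is the measured potential (vs RHE) minus the 1.23 V OER equilibrium potential, and the Tafel slope is the slope of η against log₁₀(j). Only the two (E, j) points quoted in the text are used here, so the slope is a crude two-point estimate rather than a fit over the kinetic region.

```python
import numpy as np

E_O2_H2O = 1.23  # V vs RHE, equilibrium OER potential

def overpotential(E_vs_rhe: float) -> float:
    """Overpotential eta in volts for a point on the polarization curve."""
    return E_vs_rhe - E_O2_H2O

print(overpotential(1.41))  # 0.18 V -> 180 mV at 10 mA cm^-2
print(overpotential(1.59))  # 0.36 V -> 360 mV at 100 mA cm^-2

def tafel_slope(j_mA_cm2, eta_V):
    """Tafel slope in mV/dec from a linear fit eta = a + b*log10(j)."""
    b, a = np.polyfit(np.log10(j_mA_cm2), np.asarray(eta_V) * 1e3, 1)
    return b

# Crude two-point estimate from the quoted values; the slope fitted in the
# low-current kinetic region (Figure 2b) is expected to be smaller.
print(tafel_slope([10, 100], [0.18, 0.36]))  # 180 mV/dec
```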
Raman spectra (Figure S19, Supporting Information) were collected to reveal the prominent features of the functionalized carbon in M1-M7. The D band (≈1339 cm⁻¹) denotes the defect sites of disordered sp³ carbon, while the G band (≈1594 cm⁻¹) reflects sp² graphitic carbon. The I_D/I_G ratios of the samples, ranging from 0.81 to 1.02, are affected by the Cu and Ni contents; higher I_D/I_G ratios correspond to more electron-transfer and catalytic centers. Notably, M2 shows a relatively high I_D/I_G ratio, implying a proper metallic ratio that introduces more defects at the catalyst surface. Previous literature has reported that the carbon provides a large surface area, enhances electrolyte diffusion, smooths the pathways for electron and mass transfer, and stabilizes the skeletal structure. Adjusting the Cu/Ni ratio modulates the local coordination environments and electronic structures through the synergistic interplay between Cu and Ni. This generates defects at lattice sites or edges, forming abundant active sites that ultimately facilitate the OER.
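As a minimal illustration of how such I_D/I_G ratios are usually obtained, the sketch below fits two Lorentzians to a background-subtracted spectrum around the D and G band positions quoted in the text. The spectrum here is synthetic placeholder data; peak-height ratios are reported, though area ratios are also common in the literature.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

def two_band(x, aD, xD, gD, aG, xG, gG):
    # Sum of a D-band and a G-band Lorentzian
    return lorentzian(x, aD, xD, gD) + lorentzian(x, aG, xG, gG)

def id_ig_ratio(shift, intensity):
    p0 = [intensity.max(), 1339, 50, intensity.max(), 1594, 40]
    popt, _ = curve_fit(two_band, shift, intensity, p0=p0)
    aD, aG = popt[0], popt[3]
    return aD / aG  # peak-height ratio

# Synthetic example with a true ratio of ~1.0, as found for M2:
shift = np.linspace(1100, 1800, 700)
intensity = two_band(shift, 1.0, 1339, 55, 1.0, 1594, 40)
intensity += np.random.default_rng(0).normal(0, 0.01, shift.size)
print(id_ig_ratio(shift, intensity))  # ~1.0
```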
To investigate the in situ reconstruction of M2, its structure and valence states after electrochemical activation were characterized by high-resolution transmission electron microscopy (HRTEM), XPS, and AC-HAADF-STEM. The HRTEM images in Figure 3a show the pronounced reconstruction of M2 after 100 cycles of CV activation (M2-CV): the 3D CuNi alloy NPs@N/C gradually converted into 2D hetero-plates of Ni-doped CuO clusters supported on CNO frameworks. The (−111) planes of CuO (JCPDS#80-0076) in Figure 3a illustrate the surface evolution and structural reconstruction during electrochemical activation. The 3D false-color image in Figure 3b effectively demonstrates the presence of single metal atoms on the amorphous support within M2-CV. Figure 3e,f shows the electron energy loss spectra (EELS), with the corresponding selected areas shown in Figure 3c,d (blue squares), further confirming the composition of M2-CV at the atomic level. After reconstruction, the catalyst converts into a hetero-plate of Ni-doped CuO clusters (Figure 3d,f) supported on CNO frameworks (Figure 3c,e).
Ultraviolet photoemission spectroscopy (UPS) was used to explore the electronic properties of the reconstructed pre-catalysts in depth. As shown in Figure 4a, the valence band (E_V) of M2 and M2-CV is located at 2.21 and 2.45 eV below the Fermi energy (E_F), respectively, obtained by linearly extrapolating the leading edge of the spectrum to the baseline. In addition, the work function (Φ) can be calculated as Φ = hν − E_onset, where hν is the incident photon energy (40.2 eV) and E_onset is the onset level of the secondary electrons, as shown in Figure 4b. The corresponding work functions of M2 and M2-CV are 4.78 and 4.53 eV, reflecting the dynamics of electrons at the sample surfaces. M2-CV shows a higher Fermi level and a smaller work function than M2, indicating that the pre-catalyst possesses better electrical conductivity and can deliver electrons more easily after in situ reconstruction. All the characterizations reveal a significant reconstruction during the pre-catalytic process, involving oxidation of the metal, formation of oxygen/nitrogen-related defect sites, and adsorption of carboxylate groups at the catalyst interface. [25] In Figure 4c, the Cu 2p XPS peaks of M2-CV at ≈953.5 and ≈932.8 eV indicate that the main states of Cu are Cu²⁺ and Cu⁰, respectively, while Ni converts mainly into Ni²⁺ in M2-CV, in line with the O 1s spectrum in Figure 4e (OH⁻ 531.6 eV, O─Cu 530.4 eV, and O─Ni 528.9 eV). [26] In Figure S20a,b (Supporting Information), the characteristic C 1s and N 1s peaks become more complicated after the electrochemical reaction owing to the appearance of new C═C (283.5 eV) and C─O (287.3 eV) bonds and of C─F bonds from Nafion. The N 1s spectrum shows that graphitic N (401.8 eV) fades away while M─N (399.5 eV) emerges during the OER. In Figure S21 (Supporting Information), the slope of the Mott-Schottky curve of M2 changes after activation, indicating a higher carrier concentration, possibly due to the introduction of electrons or holes and their enhanced mobility; this provides greater availability of electrons and holes for charge transport.
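The work-function arithmetic quoted above is simple enough to verify directly. In the sketch below the secondary-electron onset energies are back-calculated from the reported work functions, since the raw E_onset values are only shown graphically in Figure 4b.

```python
# Minimal sketch of the UPS relation Phi = h*nu - E_onset, with the He II
# photon energy of 40.2 eV stated in the text. The E_onset values below are
# back-calculated, not read off the raw spectra.

H_NU = 40.2  # eV, incident photon energy

def work_function(e_onset_eV: float) -> float:
    return H_NU - e_onset_eV

print(work_function(35.42))  # 4.78 eV for M2
print(work_function(35.67))  # 4.53 eV for M2-CV (smaller Phi after CV)
```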
The X-ray absorption near edge structure (XANES) spectra of the Cu K-edge and Ni K-edge are shown in Figure 4f,g. The absorption energy of M2-CV lies between those of Cu foil and commercial CuO, as well as between those of Ni foil and commercial NiO, suggesting that both Cu and Ni in M2-CV have valence states between 0 and +2. Wavelet-transform (WT-EXAFS) contour plots of the Cu and Ni K-edge extended X-ray absorption fine structure (FT-EXAFS) of M2-CV and commercial NiO/CuO are shown in Figure 4h-m. Figure 4h shows two prominent peaks at ≈1.60 and 2.60 Å, corresponding to the nearest-neighbor Ni─O and next-nearest-neighbor Ni─Ni coordination shells in commercial NiO. [27] In contrast, in Figure 4l the peak attributed to the Ni─O(N) interaction is located at 1.75 Å and the Ni─Ni(Cu) peak is shifted to 2.75 Å (Figure S22, Supporting Information). Figure 4i shows two prominent peaks at ≈1.55 and 2.55 Å, corresponding to the nearest-neighbor Cu─O and next-nearest-neighbor Cu─Cu coordination shells in commercial CuO. In the Fourier-transformed Cu K-edge EXAFS of M2-CV (Figure 4m), the peak belonging to the Cu─O(N) interaction is located at 1.50 Å and the Cu─Cu(Ni) peak is shifted to 3.05 Å (Figure S23, Supporting Information). The detailed structural parameters calculated from the EXAFS fittings are listed in Table S2 (Supporting Information); compared with commercial CuO and NiO, these changes in the Cu─O/Ni─O distances and in the Cu/Ni─O coordination numbers of M2-CV arise mainly because the oxidized Cu and Ni sites are arranged in the unique heterostructure of Ni-doped CuO immobilized on CNO frameworks. Density functional theory (DFT) calculations were further performed to understand the superior OER performance of the CuNi alloy pre-catalysts. The electronic structures of the bulk alloy systems were studied first. As shown in Figure 4j, the d-band center of Cu upshifts through interaction with Ni, resulting in increased oxygen adsorption on the alloy surface. We then simulated the OER process on the oxidized surfaces. Compared with the widely studied NiOOH, CuO is commonly considered an inefficient electrocatalyst. In the bulk, the coordination number of Cu is 4; when exposed on the surface, half of the copper atoms are under-coordinated with a coordination number of 3 and are expected to be the reactive sites. In M2-CV, the Ni-doped CuO nanocluster shows coordination numbers of 2.4/2.4 for Cu─O/Cu─N, respectively (Table S2, Supporting Information). [28] As shown in Figure 4k, the active Cu sites are readily occupied to form *OH, corresponding to Cu²⁺ species. However, further oxidation to *O is difficult because copper is already in its highest oxidation state. As a result, the formation of *O from *OH is the potential-determining step (PDS), with an energy change of 2.37 eV. After introducing under-coordinated nickel, the conversion between Ni²⁺ and Ni³⁺ may stabilize the *O intermediate and thus significantly promote the oxidation step to *O; the PDS barrier decreases to 1.97 eV. Therefore, the Ni-doped CuO surface exhibits improved OER electrocatalytic performance. [29]
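As a minimal illustration of the bookkeeping behind these PDS numbers in the standard computational-hydrogen-electrode picture, the sketch below identifies the PDS as the largest free-energy step of the four proton-coupled electron transfers and converts it to a theoretical overpotential. The step energies are illustrative (chosen only so that the largest step reproduces the quoted 2.37 and 1.97 eV and the four steps sum to 4.92 eV); the actual DFT profile is in Figure 4k.

```python
# Minimal sketch: PDS and theoretical overpotential from an OER free-energy
# profile at U = 0. The four steps correspond to the successive formation of
# *OH, *O, *OOH, and O2. Step values below are illustrative placeholders.

E_EQ = 1.23  # eV per electron at the equilibrium potential

def pds_and_overpotential(dG_steps):
    pds = max(dG_steps)          # potential-determining step
    return pds, pds - E_EQ       # theoretical overpotential

cuo = [0.70, 2.37, 1.20, 0.65]           # PDS = *OH -> *O (illustrative)
ni_doped_cuo = [0.80, 1.97, 1.30, 0.85]  # same PDS, lower barrier

for name, prof in [("CuO", cuo), ("Ni-doped CuO", ni_doped_cuo)]:
    pds, eta = pds_and_overpotential(prof)
    print(f"{name}: PDS = {pds:.2f} eV, eta_theory = {eta:.2f} V")
# CuO: 2.37 eV -> 1.14 V; Ni-doped CuO: 1.97 eV -> 0.74 V
```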
Conclusion
Taking advantage of the in situ atom rearrangement during electrochemical activation, the pre-catalysts transformed from the original multilevel structure of CuNi alloy nanoparticles encapsulated in N-doped carbon (CuNi nanoalloy@N/C) into a highly active compound of Ni-doped CuO clusters supported on CNO frameworks. By manipulating the Cu/Ni ratio of CuNi nanoalloy@N/C, the electronic properties and d-band center of the highly active compound are tailored to modulate the adsorption/desorption energies of the OER intermediates. This optimized interplay among Cu, Ni, C, N, and O modifies the interface, triggers the active phases, and regulates the work functions of the active sites, thereby realizing a synergistic boost in OER performance. Utilizing SMSI effects induced by electrochemical activation is a promising approach to enhance metal dispersion and create metal/support interfaces on supported metal catalysts. Ultimately, exquisite control of SMSI can be achieved by properly exploiting the dynamic transformation processes, which aids the rational design of multicomponent catalysts.
Figure 1 .
Figure 1. a) Schematic illustration of the synthesis process of CuNi nanoalloy@N/C; b) XRD patterns of fresh M2 and M2-CV; c,d) TEM images of M2, with the inset of c) showing the selected-area electron diffraction (SAED) pattern; e) corresponding TEM-EDS mapping images. Structural characterization of reconstructed M2 after CV activation for 100 cycles: f) TEM images and the corresponding SAED patterns; g) AC-HAADF-STEM image and high-loss EELS image of the corresponding Cu/Ni oxide cluster; h) corresponding TEM-EDS mapping images.
Figure 2 .
Figure 2. a) Linear sweep voltammetry (LSV) curves of the pre-catalysts M1-M7 after electrochemical activation by 100 cycles of cyclic voltammetry (CV); b) the corresponding Tafel slopes, with an overpotential at ≈10 mA cm⁻², for M1-M7; c) comparison of the Tafel slopes and the potentials at 10 mA cm⁻² (E_J = 10 mA cm⁻²) and 100 mA cm⁻² (E_J = 100 mA cm⁻²) for M1-M7 after electrochemical activation; d) LSV curves of fresh M2 and M2 after the 100-cycle CV test, with the inset showing the corresponding CV curves.
Figure 3 .
Figure 3. a) HR-TEM image of M2-CV; b) corresponding 3D false-color rendering of an HAADF-STEM image of M2-CV, in which different colors represent pixel intensities (the red peaks indicate the precise locations of individual metal atoms); e,f) high-loss EELS images of the corresponding selected areas (marked with blue squares) in c,d).
Figure 4 .
Figure 4. Ultraviolet photoelectron spectra (UPS) of M2 and M2-CV: a) valence-band spectra and b) secondary-electron cutoff. High-resolution XPS spectra of M2 and M2-CV: c) Cu 2p, d) Ni 2p, e) O 1s. f) Normalized Cu K-edge XANES spectra; g) normalized Ni K-edge XANES spectra; wavelet-transform WT-EXAFS contour plots of h) commercial NiO, i) commercial CuO, l) metallic Ni, and m) Cu, respectively; j) d-band centers of Cu-Ni systems; k) free-energy profile of the OER process on Cu(111) and Ni-doped Cu(111) surfaces, with the structures of key intermediates shown in the insets and the PDS barriers marked. | 6,007.4 | 2024-03-21T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Engineering"
] |
Can dark matter drive electroweak symmetry breaking?
We consider the possibility of an oscillating scalar field accounting for dark matter and dynamically controlling the spontaneous breaking of the electroweak symmetry through a Higgs-portal coupling. This requires a late decay of the inflaton field, such that thermal effects do not restore the electroweak symmetry after reheating, and so inflation is followed by an inflaton matter-dominated epoch. During inflation, the dark scalar field acquires a large expectation value due to a negative non-minimal coupling to curvature, thus stabilizing the Higgs field by holding it at the origin. After inflation, the dark scalar oscillates in a quartic potential, behaving as dark radiation, and only when its amplitude drops below a critical value does the Higgs field acquire a non-zero vacuum expectation value. The dark scalar then becomes massive and starts behaving as cold dark matter until the present day. We further show that consistent scenarios require dark scalar masses in the few GeV range, which may be probed with future collider experiments.
In addition, the introduction of a dark scalar singlet may solve the Higgs vacuum stability problem. The Higgs vacuum is stable if its self-coupling, $\lambda_h$, is positive at any energy scale $\mu$ for which the minimum of its potential is a global minimum. However, for the measured Higgs mass $m_h \simeq 125$ GeV, $\lambda_h$ becomes negative at energy scales around $\mu \sim 10^{10}-10^{12}$ GeV [26,27], well below the GUT or Planck scales. This could constitute a problem, since it may lead to an instability of the Higgs potential (see, e.g., Refs. [26,28,29] and references therein). The behaviour of $\lambda_h$ is mostly driven by the large one-loop contribution of the top Yukawa coupling, i.e., it depends strongly on the top quark mass. When the coupling becomes negative, the renormalization-group-improved Higgs potential is $V(h) = \lambda_h h^4/4 < 0$ and, therefore, the electroweak Higgs minimum could be only a local minimum instead of a global one. However, if the time scale for quantum tunneling to the true minimum exceeds the age of the Universe, the Higgs vacuum is only metastable. In fact, Ref. [29] showed that the lifetime for quantum tunneling is extremely long: about the fourth power of the age of the Universe.
There have been several attempts to cure the (in)stability problem of the electroweak vacuum. For instance, Ref. [26] showed that a shift in the top quark mass of about $\delta m_t = -2$ GeV would suffice to keep $\lambda_h > 0$ up to the Planck scale (this could also motivate more precise measurements of the top quark mass). Other approaches introduce physics beyond the Standard Model. In particular, coupling a scalar singlet with a non-zero expectation value to the Higgs may stabilize the electroweak vacuum, provided that the contribution of the Higgs-singlet coupling keeps the Higgs self-coupling positive. This idea has been explored in the literature, with some works promoting the singlet scalar to a dark matter candidate, as illustrated in Refs. [8,30,31]. In addition, one must consider the stability of the Higgs field during inflation, since de Sitter quantum fluctuations could drive the field to the true global minimum of the potential. This may be avoided if the Higgs field is sufficiently heavy during inflation, which may be achieved by coupling it to other fields such as the inflaton itself [32] or a dark matter scalar, as we propose in this work.
We consider a self-interacting dark scalar field, $\Phi$, coupled to the Higgs field, $H$, through a standard biquadratic "Higgs-portal" coupling and non-minimally coupled to gravity, with the Higgs potential $V(H)$ taking the standard "Mexican hat" shape. As in previous works [19,20], we assume an underlying scale invariance of the theory, spontaneously broken by some mechanism that generates the Planck and electroweak mass scales in the Lagrangian but forbids a bare mass term for the dark scalar. It is thus easy to see that, for a sufficiently large value of $\Phi$, the minimum of the Higgs potential will lie at the origin, and it is natural to enquire whether the dark scalar can dynamically drive the spontaneous breaking of the electroweak symmetry.
To prevent thermal effects from restoring the electroweak symmetry after inflation, we focus on scenarios with a late inflaton decay, such that the reheating temperature, $T_R$, is below $\sim 100$ GeV. Consequently, inflation is followed by a long matter-dominated epoch while the inflaton oscillates about the origin in an approximately quadratic potential. As we will see in more detail below, the negative sign of the non-minimal coupling to gravity leads to a large expectation value for the dark scalar during inflation, which makes the Higgs field heavy and stabilizes it at the origin during this period. After inflation the dark scalar starts oscillating about the origin in its quartic potential, and its amplitude decreases with expansion, such that at some point it falls below a critical value that allows the Higgs to develop a non-zero vacuum expectation value. The spontaneous breaking of the electroweak symmetry is thus dynamically controlled by the dark scalar, and once it occurs the latter gains a mass and starts behaving as cold (pressureless) dark matter.
This work is organized as follows. In the next section we discuss the dynamics of the dark scalar and the Higgs field during inflation. In section 3 we describe the post-inflationary dynamics of both fields, discussing the possibilities of reheating occurring before or after the electroweak symmetry is spontaneously broken. We discuss the consistency of our analysis and parametric constraints in section 4 and present our results for the allowed values of the dark scalar mass and couplings in section 5. We summarize our discussion and main conclusions in section 6.
Inflation
During inflation, the relevant interaction Lagrangian for the dynamics of the Higgs and dark scalar fields, assuming they have no significant interactions with the inflaton field, involves the dark scalar's non-minimal coupling to the Ricci scalar, which during inflation can be written in terms of the Hubble parameter as $R \simeq 12 H_{\rm inf}^2$, where $H_{\rm inf}$ can be related to the tensor-to-scalar ratio, $r$, of primordial curvature perturbations. Since the interaction term between $\phi$ and $R$ has a negative sign, the dark scalar acquires a vacuum expectation value (vev) during inflation, $\phi_{\rm inf}$, at the minimum of its effective potential. The dark scalar then provides a large mass to the Higgs field during inflation. We will see later that $g/\sqrt{\lambda_\phi} \sim 10^2$ if the dark scalar accounts for all dark matter, such that $m_h \gg H_{\rm inf}$ for $\xi \gtrsim 10^{-5}$. This large Higgs mass has two related effects. First, it induces an additional quadratic term in the Higgs potential, thus shifting the field value at which the potential becomes unbounded (i.e. $\lambda_h < 0$) towards values larger than $H_{\rm inf}$, i.e. above the $10^{10}-10^{12}$ GeV scale at which it becomes unbounded in the Standard Model [27]. Second, it suppresses the Higgs de Sitter quantum fluctuations, which for a light Higgs ($m_h \ll H_{\rm inf}$) would be $\sim H_{\rm inf}/2\pi \sim 10^{12}$ GeV unless the tensor-to-scalar ratio is very suppressed. For a massive Higgs field, the field variance during inflation on super-horizon scales is given in [34] and, using Eq. (2.5), simplifies to an average fluctuation amplitude $\sqrt{\langle h^2\rangle} \lesssim 10^{11}$ GeV for $r \lesssim 10^{-2}$ and $\xi \gtrsim 0.1$. Thus, the coupling between the Higgs and the dark scalar can prevent the former from falling into the putative large-field true minimum during inflation.
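To make the scale of $H_{\rm inf}$ concrete, the sketch below evaluates the standard relation between the inflationary Hubble rate and the tensor-to-scalar ratio, $H_{\rm inf} \simeq \pi M_P \sqrt{A_s r/2}$, using the measured curvature power spectrum amplitude $A_s \simeq 2.1\times10^{-9}$; the exact prefactor depends on conventions, and $A_s$ is an assumption here rather than a value quoted in the text.

```python
import math

M_P = 2.435e18   # GeV, reduced Planck mass
A_S = 2.1e-9     # scalar power spectrum amplitude (Planck), assumed

def H_inf(r: float) -> float:
    """Inflationary Hubble rate from the tensor-to-scalar ratio r."""
    return math.pi * M_P * math.sqrt(A_S * r / 2.0)

print(f"{H_inf(1e-2):.2e} GeV")  # ~2.5e13 GeV for r = 0.01, so
# H_inf/(2*pi) ~ 1e12 GeV, matching the light-Higgs fluctuation estimate.
```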
We note that the dark scalar is also heavy during inflation, such that its de Sitter quantum fluctuations, with an amplitude $\sqrt{\langle\delta\phi^2\rangle} \simeq 0.05\,\xi^{-1/4} H_{\rm inf}$ [19,20], have a negligible effect on its expectation value $\phi_{\rm inf} \gg H_{\rm inf}$, the latter setting the initial amplitude for field oscillations in the post-inflationary epoch.
Post-inflationary period
In this model we assume that, after inflation, the inflaton field, $\chi$, does not decay immediately. Instead, the inflaton evolves as non-relativistic matter while oscillating about the minimum of its potential, and an early matter era follows inflation until reheating finally occurs. Therefore, there are significant changes in the dynamics of the Universe with respect to the usual radiation-dominated epoch: the scale factor evolves as $a \sim t^{2/3}$ and the Ricci scalar has a non-vanishing value, $R = 3H^2$, unlike during the radiation era ($R = 0$). The inflaton energy density then evolves as $\rho_\chi \propto a^{-3}$, normalized to its value at the end of inflation, where the subscript "end" refers to the end of inflation. Note that $H_{\rm end}$ depends on the particular inflationary model. Consider, for instance, the case where inflation is driven by a field with a quadratic potential, $V(\chi) = \frac{1}{2} m_\chi^2 \chi^2$, where $m_\chi$ is the inflaton's mass. The number of e-folds of inflation after the observable CMB scales become super-horizon depends on $\chi_*$, the value of the inflaton field when those scales exit the horizon, with $\chi_* \gg \chi_{\rm end}$. Inflation ends when $\epsilon = M_P^2 (V'/V)^2/2 \sim 1$, yielding $\chi_{\rm end} \simeq \sqrt{2} M_P$, from which the relation between $H_{\rm end}$ and $H_{\rm inf}$ follows. Although the quadratic potential is already in some tension with Planck bounds on the tensor-to-scalar ratio [35], we will use this relation with $N_e = 60$ henceforth, bearing in mind that a different relation between $H_{\rm end}$ and $H_{\rm inf}$ may lead to somewhat different results. Note that this model dependence is nevertheless degenerate with the unknown value of the tensor-to-scalar ratio, which we take as a free parameter. At some stage, the inflaton decay reheats the Universe, establishing the beginning of the radiation-dominated epoch. This scenario resembles the so-called Polonyi problem found in many supergravity models, where the Polonyi field or other moduli decay at late times (see, e.g., Refs. [36][37][38]). We assume that the inflaton transfers all its energy density into Standard Model degrees of freedom at a reheating temperature $T_R$, where $g_{*R}$ is the number of relativistic degrees of freedom at reheating. The reheating temperature must be above $\sim 10$ MeV, as the Universe must be radiation-dominated during Big Bang nucleosynthesis (BBN). As mentioned earlier, we consider the case where reheating does not restore the electroweak symmetry, such that electroweak symmetry breaking is controlled by the dynamics of the dark matter scalar field, i.e. $T_R \lesssim m_W \simeq 80$ GeV. It is important to note that before reheating there is no notion of temperature, since the inflaton has not yet decayed. Using Eqs. (3.1) and (3.4), the number of e-folds from inflation until reheating, $N_R$, can be computed for $N_e = 60$. The interesting feature of this model is that the dark scalar controls a non-thermal EWSB. From Eq. (2.1), it is easy to see that the minimum of the Higgs potential lies at the origin while the dark scalar amplitude is large; EWSB then takes place when the amplitude of the field becomes smaller than a critical value, $\phi_c$, noting that, within a few e-folds, the Higgs field should attain its final vacuum expectation value $|h| = v$. In the following subsections, we study the dynamics of the dark scalar when reheating occurs after or before EWSB. Note, however, that $N_R$ is determined solely by $r$ and $T_R$, being independent of when EWSB takes place. Hence, our model has five free parameters: $r$, $\xi$, $g$, $\lambda_\phi$ and $T_R$.
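The e-fold counting from the end of inflation to reheating can be sketched explicitly: during the inflaton matter era $\rho_\chi \propto a^{-3}$, so $N_R = \frac{1}{3}\ln(\rho_{\rm end}/\rho_R)$ with $\rho_{\rm end} = 3H_{\rm end}^2 M_P^2$ and $\rho_R = (\pi^2/30)\,g_{*R}\,T_R^4$. The values of $H_{\rm end}$ and $g_{*R}$ below are representative choices, not numbers fixed by the paper.

```python
import math

M_P = 2.435e18  # GeV, reduced Planck mass

def N_R(H_end_GeV: float, T_R_GeV: float, g_star: float = 80.0) -> float:
    """E-folds of inflaton matter domination between inflation and reheating."""
    rho_end = 3.0 * H_end_GeV**2 * M_P**2        # energy density at end of inflation
    rho_R = (math.pi**2 / 30.0) * g_star * T_R_GeV**4  # radiation density at T_R
    return math.log(rho_end / rho_R) / 3.0

print(N_R(H_end_GeV=1e13, T_R_GeV=10.0))  # ~44 e-folds for these choices
```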
Reheating after EWSB
The first scenario we study is the one where reheating occurs after EWSB, as illustrated in Figure 1. Before EWSB, the quartic term dominates the energy density of the dark scalar, which oscillates about the origin with initial amplitude $\phi_{\rm inf}$. The amplitude decays as $\phi \propto a^{-1}$, such that the field behaves as dark radiation, $\rho_\phi \propto a^{-4}$. Note that $R \propto H^2 \propto a^{-3}$, so the effects of the non-minimal coupling to gravity decay faster than those of the quartic self-interactions and may be neglected. We assume, for simplicity, that once the electroweak symmetry is spontaneously broken and the field becomes massive, the associated quadratic term in the scalar potential becomes dominant, such that the field behaves as cold dark matter (CDM) from EWSB onwards. Therefore, the dark scalar exhibits two behaviors, dark radiation before and CDM after the scale factor reaches the value $a_c$ at which EWSB takes place. Matching the two regimes at EWSB yields the number of e-folds from inflation until EWSB, $N_{\rm EW}$ (Eq. (3.10)). Once reheating occurs, the Universe enters the usual radiation-dominated epoch. We can then define a temperature, and the number of dark matter particles in a comoving volume, $n_\phi/s$, becomes constant. The dark scalar amplitude at reheating follows from Eq. (3.11), leading to the expression in Eq. (3.13). The number of particles in a comoving volume at $T_R$ then follows, where $m_\phi$ stands for the dark scalar mass once the electroweak symmetry is spontaneously broken. The present dark matter abundance then reads as in Eq. (3.16), where $g_{*0}$, $T_0$ and $H_0$ are the present values of the number of relativistic degrees of freedom, the CMB temperature and the Hubble parameter, respectively. Substituting Eq. (3.13) into this expression and fixing $\Omega_{\phi,0} = 0.26$, we obtain a relation between $g$ and $\lambda_\phi$ (Eq. (3.17)).
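The final step of this estimate is standard: once $n_\phi/s$ is frozen after EWSB, the present abundance follows from the comoving yield $Y = n_\phi/s$ via $\Omega_\phi h^2 \simeq 2.74\times10^8\,(m_\phi/{\rm GeV})\,Y$, where the numerical constant combines the present entropy density ($s_0 \simeq 2891\ {\rm cm^{-3}}$) and critical density. The yield used below is a placeholder, not a number from the paper.

```python
# Minimal sketch of the relic-abundance bookkeeping for a frozen comoving
# number density. The constant 2.74e8 combines s_0 ~ 2891 cm^-3 and
# rho_c/h^2 ~ 1.054e-5 GeV cm^-3, both standard cosmological values.

def omega_h2(m_phi_GeV: float, Y: float) -> float:
    return 2.74e8 * m_phi_GeV * Y

# Example: a 1 GeV scalar needs Y ~ 4.3e-10 to give Omega h^2 ~ 0.12
print(omega_h2(1.0, 4.3e-10))  # ~0.118
```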
Reheating before EWSB
The second putative scenario is the case where reheating occurs before EWSB, as illustrated in Figure 2. Since the number of e-folds from inflation until reheating does not depend on when EWSB takes place, $N_R$ is given by Eq. (3.5) as in the previously discussed scenario. Similarly, $N_{\rm EW}$ only depends on $\phi_{\rm inf}$ and $\phi_c$ and is therefore given by Eq. (3.10). The difference with respect to the previous scenario is that $N_{\rm EW}$ should now exceed $N_R$. The dark scalar behaves like dark radiation from reheating until EWSB, after which $n_\phi/s$ becomes constant. From reheating onwards, the Universe enters the usual radiation-dominated epoch and $R = 0$.
The amplitude of the field at reheating differs from the previous scenario; since there is now a well-defined temperature, the amplitude of the field can be written as a function of temperature. This can be used to compute the temperature at which EWSB occurs, $T_c$, at which the dark scalar stops holding the Higgs at the origin. Notice, however, that $T_c$ must be smaller than the usual $T_{\rm EW} \sim 80$ GeV, so that the dark scalar controls EWSB and the latter is not restored by thermal effects. Proceeding as in the previous subsection, since $n_\phi/s$ is constant as soon as the field starts behaving as CDM, the present dark matter abundance can again be computed and, setting $\Omega_{\phi,0} = 0.26$, we obtain the temperature at which the field amplitude falls below the critical value. Hence, we conclude that, for reheating to occur before EWSB, $T_c$ would have to be well above $T_{\rm EW} \sim 80$ GeV. This is not consistent with our reasoning, given that at such temperatures the Higgs thermal mass is still sufficiently large to hold the field at the origin, such that EWSB does not occur at $T_c$ as assumed; consequently, the dark scalar would remain massless and behave as dark radiation, contrary to our starting assumption. In the remainder of this paper, we will thus focus only on the case where reheating occurs after EWSB, given that in this scenario the dark scalar, in addition to being a viable dark matter candidate, can also control a non-thermal EWSB.
Consistency analysis
In analyzing the dynamics of the dark scalar and of the Higgs field both during and after inflation, we have made several technical assumptions. In this section, we discuss the parametric constraints imposed by these assumptions and by the properties of the Higgs boson measured at the Large Hadron Collider (LHC). First, our scenario assumes that inflation is driven by a scalar field, $\chi$, that is neither the dark scalar nor the Higgs field. We therefore have to ensure, in particular, that the dark scalar does not affect the dynamics of inflation. Requiring that the dark scalar's contribution to the effective potential during inflation does not significantly reduce the inflationary energy density, $V(\chi) \simeq 3 H_{\rm inf}^2 M_P^2$, implies a condition (Eq. (4.2)) that constrains the allowed values of the non-minimal coupling $\xi$ and the self-coupling $\lambda_\phi$, depending on the tensor-to-scalar ratio, i.e. the scale of inflation. Second, we have assumed that the dark scalar field starts behaving as CDM as soon as the electroweak symmetry is spontaneously broken, i.e. when the field amplitude falls below the critical value $\phi_c$. This means that the quadratic term has to dominate over the quartic term at EWSB, that is, $g^2 v^2 \phi_c^2 / \lambda_\phi \phi_c^4 > 1$, which translates into the condition in Eq. (4.4). Finally, radiative corrections to the quartic coupling from the Higgs-portal coupling should be small, unless we accept some degree of fine-tuning (Eq. (4.5)). From the experimental point of view, the Higgs may decay into dark scalar pairs with a decay width $\Gamma_{h\to\phi\phi}$ (Eq. (4.6)), leading to a Higgs branching ratio into invisible particles, assuming $\Gamma_{h\to{\rm inv}} = \Gamma_{h\to\phi\phi}$. Current limits from the LHC establish an upper bound on this branching ratio, ${\rm Br}_{\rm inv} < 0.23$ [39], and using $\Gamma_h = 4.07 \times 10^{-3}$ GeV [40], this yields an upper bound on the Higgs-portal coupling, $g < 0.13$ (Eq. (4.8)), which translates into an upper bound $m_\phi \lesssim 22.6$ GeV. From the dynamical perspective, we have also implicitly assumed that the dark scalar field remains in the form of an oscillating condensate, such that processes that may lead to its evaporation and subsequent thermalization (which would yield a WIMP-like dark matter candidate) must be inefficient, as we discuss in detail below.
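The collider bound can be sketched numerically. The width formula below assumes the normalization $\Gamma_{h\to\phi\phi} = g^4 v^2/(32\pi m_h)\,\sqrt{1 - 4m_\phi^2/m_h^2}$ for the biquadratic portal; prefactor conventions vary between papers, but this choice reproduces the quoted numbers ($g \lesssim 0.13$ for ${\rm Br}_{\rm inv} < 0.23$).

```python
import math

V_EW, M_H, GAMMA_SM = 246.0, 125.0, 4.07e-3  # GeV; SM Higgs width from the text

def br_inv(g: float, m_phi: float) -> float:
    """Invisible branching ratio under the assumed width normalization."""
    if 2 * m_phi >= M_H:
        return 0.0  # decay kinematically closed
    gamma = (g**4 * V_EW**2 / (32 * math.pi * M_H)
             * math.sqrt(1 - 4 * m_phi**2 / M_H**2))
    return gamma / (gamma + GAMMA_SM)

print(br_inv(0.13, 1.0))  # ~0.25; solving Br = 0.23 instead gives g ~ 0.126,
# consistent with the quoted bound g < 0.13.
```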
Condensate evaporation
The dark scalar provides mass to the Higgs field during the period before EWSB. Since $\phi$ is oscillating, this could induce oscillations of the Higgs mass. This may pose a problem: if the Higgs mass drops below the condensate's effective mass, $m_h < \sqrt{3\lambda_\phi}\,\phi_{\rm rad}$, Higgs production by the oscillating condensate becomes kinematically allowed and leads to the condensate's evaporation.
A solution to this problem is to provide initial conditions for the field such that its absolute value, and hence the Higgs mass, does not oscillate. This is possible if the dark scalar oscillates in the complex plane, as e.g. in the Affleck-Dine mechanism for baryogenesis [41,42], with no oscillations of the modulus in the special cases where $\phi = A e^{\pm i\omega t}$.
To generate the required angular momentum in field space, we need to introduce terms in the scalar potential that depend on the phase of the field and not only on its modulus, i.e. which violate the global U(1) symmetry of the original Lagrangian, Eq. (1.1). Since gravity is expected to violate such global symmetries, it is natural to envisage such terms in the gravitational sector of the model, in particular in the non-minimal coupling to curvature as well as in Planck-suppressed non-renormalizable operators, where the power $n$ of such operators is a positive even integer, such that the Lagrangian remains invariant under a discrete $Z_2$ symmetry that ensures the stability of the dark scalar, and $c$ is an $O(1)$ dimensionless coupling. We note that both the non-minimal coupling to curvature and the non-renormalizable term decay faster than the quartic self-interaction term, such that they play a negligible role in the late-time dynamics of the field. However, since the value of the Ricci scalar during inflation differs from its value at the end of inflation, the phase of $\phi$ at the minimum is different during and after inflation, thus making the dark scalar oscillate in the complex plane. This prevents $|\Phi|$ from oscillating significantly, such that the Higgs never becomes light enough to be produced, while $\rho_\phi$ still redshifts as dark radiation before EWSB. Another way of solving the problem is to couple the Higgs field directly to the inflaton, $\chi$, with an interaction term of the form $\frac{1}{2} g_\chi^2 \chi^2 |H|^2$ (Eq. (4.10)), where $g_\chi$ is the Higgs coupling to the inflaton, which yields an additional contribution to the Higgs mass, $\Delta m_h = g_\chi \chi/\sqrt{2}$, that is present until reheating occurs (after EWSB). Since $\phi \propto a^{-1}$ and $\chi \propto a^{-3/2}$ before EWSB, this contribution decays faster than the dark scalar's contribution to the Higgs mass, which we assume to be dominant. Nevertheless, it may be sufficient to kinematically block the production of Higgs particles by the oscillating dark scalar, provided that it exceeds the latter's oscillation frequency before EWSB (Eq. (4.11)), which imposes a lower bound on $g_\chi$. The inflaton's amplitude at EWSB, $\chi_c$, follows from its amplitude at the end of inflation, $\chi_{\rm end}$. Introducing this into Eq. (4.11) and using the relation between $g$ and $\lambda_\phi$, we obtain a lower bound on $g_\chi$ (Eq. (4.13)). Notice that smaller values of $r$, corresponding to lower inflationary scales, allow for smaller couplings $g_\chi$. Also, note that both the inflaton's and the dark scalar's contributions to the Higgs mass are oscillatory in nature. However, since they will not, in general, be in phase, the Higgs mass should not oscillate significantly, thus preventing Higgs production.
Since reheating can only consistently occur after EWSB, as shown above, and hence there is no cosmic thermal bath at this stage, the only other possible channel for the evaporation of the dark scalar field is the perturbative production of $\phi$-particles by the oscillating background condensate. The dark scalar behaves like radiation until EWSB, and the condensate decay width is given in Refs. [19,20] (Eq. (4.14)), where, at EWSB, $\phi_{\rm rad} = \phi_c$. Condensate evaporation is avoided if this width never exceeds the Hubble expansion rate until EWSB (Eq. (4.15)), noting that after EWSB this production channel is blocked since the dark scalar becomes massive (see e.g. [19,20]). Since the Universe is still matter-dominated at EWSB, the Hubble parameter can be computed using the expression for the inflaton's energy density given in Eq. (3.1). Therefore, from Eq. (4.15) we find an upper bound involving the combination $6 \times 10^{-13}\,(r/0.01)$ (Eq. (4.17)) and, using the relation between $g$ and $\lambda_\phi$ (Eq. (3.17)), the corresponding upper bound on $g$ reads as in Eq. (4.18) (for $N_e = 60$).
Results
In this section we summarize our results, taking into account all the constraints analyzed earlier. We present the regions in the $(\lambda_\phi, g)$ plane where all model constraints are satisfied, namely Eqs. (4.2)-(4.5) and (4.17). We show results for a tensor-to-scalar ratio $r = 10^{-2}$ and non-minimal couplings $\xi = 0.1, 1$, as illustrated in Figure 3. There we can see that there is a window where our model can explain all of the present dark matter abundance, for dark scalar masses larger than those obtained in previous Higgs-portal scenarios with an oscillating scalar field and an underlying scale invariance [18][19][20]. For instance, $g \sim 10^{-2}$ is allowed, which corresponds to $m_\phi \sim 1$ GeV. We may conclude that an early matter era precludes sub-GeV dark scalar masses, mainly because these would lead to super-Planckian dark scalar values during inflation that could affect the latter's dynamics.
In fact, it is possible to obtain analytic expressions for the window of allowed values of $g$ and $\lambda_\phi$. Since the dark scalar cannot affect inflation (Eq. (4.2)), and using the relation between the Higgs-portal coupling and the dark scalar's quartic coupling (Eq. (3.17)), the constraint on $g$ becomes a lower bound scaling as $g > 10^{3}\,(T_R/10\ {\rm GeV})\times\cdots$. In turn, requiring that the field behaves like CDM at EWSB (Eq. (4.4)), and again using Eq. (3.17), we find a lower bound on $g$ and, consequently, a lower bound on the mass. The no-fine-tuning constraint allows only Higgs-portal couplings below the threshold $g < (16\pi^2)^{1/2}\,4\times10^{2}\,(T_R/10\ {\rm GeV})\times\cdots$. Taking into account all these restrictions, along with the bound from avoiding condensate evaporation, Eq. (4.18), and the LHC bound on the Higgs invisible partial decay width, Eq. (4.8), we may alternatively plot the allowed parametric regions in the $(m_\phi, T_R)$ plane for different values of the non-minimal coupling to gravity and the tensor-to-scalar ratio, as illustrated in Figure 4. Figure 4: Allowed values of the dark scalar mass as a function of the reheating temperature, for 10 MeV $< T_R <$ 80 GeV and considering different values of the non-minimal coupling to curvature $\xi$ and tensor-to-scalar ratio $r$.
From Figure 4 we may conclude that our model predicts dark scalar masses in the few-GeV range, depending on the chosen values of the tensor-to-scalar ratio and non-minimal coupling. These may be within reach of the LHC or its successors in the near future, since for instance ${\rm Br}_{\rm inv} \simeq 2\times10^{-3}$ for $m_\phi \simeq 6$ GeV, which is not too far from the current experimental limit (Eq. (4.7)). Notice, however, that large values of the non-minimal coupling to gravity, permitting heavier dark scalars, are only allowed for lower values of $r$, i.e. in scenarios with a low inflationary scale.
Conclusions
In this work, we have analyzed the possibility of an oscillating scalar field, accounting for all the dark matter in the Universe, driving a non-thermal spontaneous breaking of the electroweak symmetry. The dark scalar is coupled to the Higgs field through a standard "Higgs-portal" biquadratic term, has no bare mass terms due to an underlying scale invariance of the theory, and has a negative non-minimal coupling to curvature. The latter, in particular, allows the dark scalar to develop a large expectation value during inflation. This holds the Higgs field at the origin both during and after inflation, until the dark scalar's oscillation amplitude drops below a critical value at which EWSB takes place. This prevents, in particular, the Higgs field from falling into the putative global minimum at large field values during inflation, ensuring at least the metastability of the electroweak vacuum.
The proposed scenario assumes a late decay of the inflaton field, such that reheating does not restore the electroweak symmetry, while the reheating temperature is still large enough to ensure successful primordial nucleosynthesis. Therefore, the Universe remains dominated by the inflaton field for a parametrically long period after inflation, while the latter oscillates about the minimum of its potential and behaves as a pressureless fluid. In fact, we have shown that consistent scenarios require reheating to occur only after EWSB, such that the latter takes place in the inflaton matter-dominated epoch, essentially in vacuum.
Compared to other scenarios of scalar field dark matter where the Higgs is the only source of mass for the dark scalar field [18][19][20], we have shown that this allows for larger Higgs-portal couplings and hence dark scalar masses, since there are no thermalized particles in the Universe that could lead to an efficient evaporation of the scalar condensate until EWSB takes place. The dark scalar's oscillations, while it behaves as dark radiation, could themselves lead to particle production, but this can either be kinematically blocked in the case of Higgs production or made less efficient by the faster expansion of the Universe in a matter-dominated regime, as compared to the standard radiation epoch.
Overall, we have concluded that consistent scenarios where the dark scalar (1) does not affect the inflationary dynamics, (2) has technically natural values for its self-coupling (i.e. requiring no fine tuning), and (3) starts behaving as cold dark matter after it breaks the electroweak symmetry, require dark scalar masses in the few GeV range, unless inflation occurs much below the grand unification scale. This looks promising from the experimental perspective, since it allows for Higgs invisible branching ratios 10 −3 , which may be within the reach of colliders in a hopefully not too distant future.
We thus reply "Yes, it can" to the fundamental question posed in this work and hope that testing this idea may shed a new light on the nature of dark matter and on its role in the cosmic history. | 6,762 | 2018-11-21T00:00:00.000 | [
"Physics"
] |
HPRT Deficiency Coordinately Dysregulates Canonical Wnt and Presenilin-1 Signaling: A Neuro-Developmental Regulatory Role for a Housekeeping Gene?
We have used microarray-based methods of global gene expression together with quantitative PCR and Western blot analysis to identify dysregulation of genes and aberrant cellular processes in human fibroblasts and in SH-SY5Y neuroblastoma cells made HPRT-deficient by transduction with a retrovirus stably expressing an shRNA targeted against HPRT. Analysis of the microarray expression data by Gene Ontology (GO) and Gene Set Enrichment Analysis (GSEA), as well as significance pathway analysis with GeneSpring GX10 and the Panther Classification System, reveals that HPRT deficiency is accompanied by aberrations in a variety of pathways known to regulate neurogenesis or to be implicated in neurodegenerative disease, including the canonical Wnt/β-catenin and the Alzheimer's disease/presenilin signaling pathways. Dysregulation of the Wnt/β-catenin pathway is confirmed by Western blot demonstration of cytosolic sequestration of β-catenin during in vitro differentiation of the SH-SY5Y cells toward the neuronal phenotype. We also demonstrate that two key transcription factor genes known to be regulated by Wnt signaling and to be vital for the generation and function of dopaminergic neurons, i.e., Lmx1a and Engrailed 1, are down-regulated in the HPRT-knockdown SH-SY5Y cells. In addition to the Wnt signaling aberration, we found that presenilin-1 shows severely aberrant expression in HPRT-deficient SH-SY5Y cells, reflected by a marked deficiency of the 23 kDa C-terminal fragment of presenilin-1 in knockdown cells. Western blot analysis of primary fibroblast cultures from two LND patients also shows dysregulated presenilin-1 expression, including aberrant proteolytic processing of presenilin-1. These demonstrations of dysregulated Wnt signaling and presenilin-1 expression, together with impaired expression of dopaminergic transcription factors, reveal a broad pleiotropic neuro-regulatory role played by HPRT expression and suggest new directions for investigating mechanisms of aberrant neurogenesis and neuropathology in LND, as well as potential new targets for restoration of effective signaling in this neuro-developmental defect.
Introduction
Lesch-Nyhan disease (LND) is a generalized monogenic inborn error of metabolism caused by deficiency of the purine reutilization enzyme hypoxanthine-guanine phosphoribosyltransferase (HPRT). The disease is characterized by two major sets of defects: systemic purine metabolic aberrations expressed as hyperuricemia, gouty arthritis and renal calculi [1], and dysfunction of the basal ganglia and other neural pathways associated with the hallmark biochemical defect in HPRT deficiency, i.e., markedly reduced levels of the neurotransmitter dopamine (DA) in the basal ganglia of both the human and mouse HPRT-deficient brain, and the resulting dystonia [2]. Evidence has been presented that the basal ganglia DA defect is associated with intrinsic defects in DA neurons [3,4]. Although the mechanisms of the purine metabolic aberrations are well understood, the mechanisms by which they lead to neural dysfunction are poorly understood.
Many studies have examined the connection between defective purine salvage and the loss of basal ganglia DA in LND and have suggested a number of possible mechanisms by which HPRT deficiency might lead to defective development or function of the DA pathways and DA neurons, including abnormal nigrostriatal axonal arborization or early axonal and neuronal degeneration [5][6][7][8][9], secondary metabolic changes that may increase oxidative stress [10,11] or impaired proteasomal function and protein misfolding that may generate molecules particularly toxic in DA neurons [12]. The relationship between HPRT deficiency and impaired DA neuron development has been partially clarified by recent studies that have demonstrated that HPRT deficiency leads to broad alterations of DA neuron-related transcription factors and aberrant neurite outgrowth and cellular morphology in mouse MN9D DA neuroblastoma and human NT2 embryonic carcinoma undergoing neuronal differentiation in vitro [13,14]. Moreover, a more recent report has confirmed similar transcriptional aberrations in HPRT-deficient human neural stem cells [15]. These published studies point to aberrant generation of DA neurons in HPRT deficiency, but a detailed understanding of the aberrant regulation of DA neuron development and function in HPRT deficiency awaits clarification of the complex interplay among multiple transcription and signaling factors that determines generation and differentiation of the DA pathways and midbrain DA neurons.
We have previously published an initial comparative characterization of the transcriptomes of normal and HPRT-deficient mouse striata and of normal and LND human fibroblasts [16]. In those studies, we identified a number of genes and gene sets whose expression is dysregulated in HPRT-deficient mouse striatum. To avoid the inevitable interpretational difficulties caused by the genetic heterogeneity of individual patient and normal control samples, we have now examined the transcriptional aberrations in the less complex system of wild type (WT) human fibroblasts and human SH-SY5Y neuroblastoma cells in which HPRT expression is efficiently knocked down by transduction with a retrovirus vector expressing a short hairpin RNA targeted to HPRT. We have identified a number of significantly altered signaling pathways in HPRT-deficient cells, and in this report we concentrate on aberrations related to the Wnt and presenilin-1 (PS-1) signaling pathways.
Wnt signaling mediates and controls many aspects of vertebrate development: approximately 19 different mammalian Wnt glycoproteins regulate many biological processes, including embryonic patterning, stem cell self-renewal and differentiation, neurogenesis and neural pathway development, and tumorigenesis [17][18][19][20][21]. Wnt signaling is now recognized to play a key regulatory role in the proliferation and differentiation of DA precursors during ventral midbrain (VM) neurogenesis, affecting many processes required for the development of DA neurons from neural stem cells and for the specification of their properties and functions, including axon guidance, neurogenesis and synapse formation. Recently, evidence has accumulated that dysfunction of these Wnt signaling pathways plays a role in the induction of damage to the DA pathways at the heart of Parkinson's disease, and special attention has been paid to the key role played by Wnt signaling in regulating DA neurogenesis [22], at least partly through the regulation of key transcription factors such as Lmx1a, which in turn activate downstream transcription factors to promote differentiation and maturation of the mesencephalic DA neurons. A variety of Wnt molecules, such as Wnt1, Wnt3A and Wnt5A, have been shown to promote different aspects of VM morphogenesis, including enhanced DA precursor differentiation, increased survival and neurogenesis. Signaling by these and other Wnts occurs largely through three separate major pathways: the canonical pathway mediated by β-catenin and the two noncanonical pathways, Wnt/PCP and Wnt/Ca2+ [23,24].
A second signaling pathway with important effects on Wnt signaling is that associated with PS-1. Defects in this pathway not only play a causal role in forms of familial Alzheimer's disease [25,26] but also interact with the canonical Wnt signaling pathway by stabilizing β-catenin and regulating its transcription [26]. These interactions between the Wnt and PS-1 signaling pathways suggest that they may cooperate in the pathogenesis of some human neurodegenerative and neurodevelopmental diseases, possibly even playing a role in LND. If PS-1 dysregulation is indeed a factor in the development of neuropathology in HPRT deficiency, it seems unlikely that its role is through accumulation of toxic Aβ42, since no histopathological evidence has ever been provided for accumulation of Aβ or other storage materials in the brains of LND patients or of the HPRT knockout mouse.
We now provide evidence that HPRT deficiency produces aberrations in canonical Wnt/β-catenin signaling, reflected by elevated levels of cytosolic phosphorylated β-catenin and markedly impaired nuclear transport of the β-catenin required for faithful gene regulatory function, together with marked instability of the 23 kDa C-terminal fragment (CTF) of PS-1. These aberrations are associated with the down-regulation of major downstream transcription factor effectors of Wnt regulation that are necessary for effective generation of DA neurons, including Lmx1a and Engrailed 1 (En1). These findings are consistent with a working model of HPRT-related neuropathology in which the primary purine metabolic aberrations characteristic of HPRT deficiency produce coordinate dysregulation of the key Wnt and PS-1 signaling pathways, which in turn collaborate to cause aberrant development of DA pathways and disturbed DA neurogenesis, at least partly by dysregulating key neuronal transcription factors necessary for faithful DA neural development. These findings may provide a new understanding of the neuropathology associated with human HPRT deficiency and may point to new genetic or metabolic targets for delineating the pathogenic mechanisms of, and potential therapies for, this and possibly other neurodevelopmental or neurodegenerative disorders associated with dysregulated Wnt and PS-1 signaling.
HPRT knockdown
The degree of HPRT gene knockdown in both the normal human fibroblasts and the SH-SY5Y cells was approximately 90%, as estimated by quantitative PCR (qPCR) measurements and Western blotting analysis (data not shown). Assays for HPRT enzymatic activity showed a reduction of approximately 75% in the knockdown fibroblasts and SH-SY5Y cells (data not shown). The apparently slightly greater degree of transcriptional knockdown of HPRT compared with HPRT protein level and enzyme activity may reflect the known high stability of the HPRT protein and is consistent with the results of a previous study that reported a similar discrepancy between HPRT transcription and protein level [27]. We consider the 90% knockdown of HPRT gene expression sufficient to obtain a useful picture of general transcriptional aberrations in HPRT deficiency, since we have previously demonstrated that knockdown of this degree was sufficient to dysregulate DA neuron transcription factor genes and to impair aspects of neurogenesis during in vitro differentiation of NT2 cells toward the neuronal phenotype [14].
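As an illustration of how such a knockdown estimate is typically derived from qPCR data, the following is a minimal sketch of the standard 2^(-ΔΔCt) relative-quantification calculation. The Ct values and the choice of reference gene are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch of the 2^(-ddCt) method for estimating knockdown from qPCR.
# All Ct values below are hypothetical placeholders.

def relative_expression(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """Expression of the target gene in knockdown cells relative to control."""
    d_ct_kd = ct_target_kd - ct_ref_kd        # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_kd - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values for HPRT against a housekeeping reference (e.g. TBP):
residual = relative_expression(ct_target_kd=28.3, ct_ref_kd=24.0,
                               ct_target_ctrl=25.0, ct_ref_ctrl=24.0)
print(f"residual HPRT expression: {residual:.1%}")  # ~10% -> ~90% knockdown
```

With these placeholder values the residual expression works out to roughly 10%, i.e. a knockdown of about 90%, matching the figure quoted above.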
Microarray studies
Of the 48,803 markers that we compared in the triplicate experiments described in MATERIALS AND METHODS, we detected a total of 286 genes whose expression was dysregulated by a factor of >2-fold (Table S1). Of that total, 117 were up-regulated and 169 were down-regulated in HPRT knockdown fibroblasts. The down-regulated genes included the HPRT gene itself, down-regulated by 5.0-6.6 fold (Table S1), in general agreement with the qPCR results for normal and knockdown cells (above). The data supporting this result have been deposited in NCBI's Gene Expression Omnibus and are accessible through GEO Series accession number GSE24345 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE24345).
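The 2-fold screen itself reduces to a simple fold-change filter over normalized intensities. The sketch below shows the idea on a toy table; the column layout and values are assumptions, not the actual GSE24345 data.

```python
# Hedged sketch of a >2-fold dysregulation screen on normalized intensities.
# The toy values are illustrative only; the real data are in GEO (GSE24345).
import pandas as pd

df = pd.DataFrame({
    "gene":      ["HPRT1", "LMX1A", "EN1", "GENE_X"],
    "control":   [1000.0, 400.0, 250.0, 300.0],
    "knockdown": [180.0, 150.0, 110.0, 290.0],
})

df["fold_change"] = df["knockdown"] / df["control"]
# Keep genes changed by more than 2-fold in either direction.
hits = df[(df["fold_change"] >= 2.0) | (df["fold_change"] <= 0.5)]
up   = (hits["fold_change"] >= 2.0).sum()
down = (hits["fold_change"] <= 0.5).sum()
print(f"{len(hits)} dysregulated genes: {up} up-regulated, {down} down-regulated")
```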
Gene ontology (GO) and gene set enrichment analysis (GSEA)
A number of the major cellular processes dysregulated by HPRT deficiency, identified by GO and GSEA analysis and further characterized by the publicly available PANTHER analytic software (www.pantherdb.org), are listed in Tables 1 and 2. To correlate gene expression changes with overall biological function, the 286 differentially expressed genes were assigned to established GO classifications by GeneSpring GX10. In this classification, we found 9 significantly aberrant GO terms (p < 0.05), particularly those related to cellular surface regions or cellular developmental processes (Table 1). For the GSEA analysis, the 286 differentially expressed genes were assigned to the functional gene sets of GSEA by GeneSpring GX10. For this clustering, we used a relaxed false discovery rate (FDR) of < 0.4 and p ≤ 0.1. With the relaxed FDR, we identified three GSEA gene sets, including Alzheimers_disease_dn (Table 1). By means of GeneSpring GX10 analysis of all 286 dysregulated genes indicated above, we identified the Wnt pathway as one of the most consistently and significantly aberrant pathways in HPRT-deficient cells (Table 2). After analysis with PANTHER, a number of additional pathways were also identified, including several relevant to neuronal function, particularly the Alzheimer's disease/presenilin signaling pathway, as indicated in Table 2. For the purposes of the present study, we chose to focus on the Wnt and PS-1 pathways for further analysis, as described below.
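Under the hood, GO over-representation tests of the kind reported here usually reduce to a hypergeometric question: given 286 dysregulated genes out of roughly 48,803 array probes, does a term's annotation count in that set exceed what chance would allow? The sketch below illustrates this; only the 286 and 48,803 figures come from the text, and the per-term counts are hypothetical.

```python
# Sketch of a GO-term over-representation test. Only 286 and 48,803 come from
# the text; the per-term counts are hypothetical.
from scipy.stats import hypergeom

array_total  = 48803  # probes on the array (population size)
dysregulated = 286    # differentially expressed genes (sample size)
term_total   = 400    # genes annotated to the GO term (hypothetical)
term_hits    = 12     # dysregulated genes carrying that annotation (hypothetical)

# One-sided p-value: P(X >= term_hits) under the hypergeometric null.
p_value = hypergeom.sf(term_hits - 1, array_total, term_total, dysregulated)
expected = dysregulated * term_total / array_total
print(f"expected {expected:.1f} hits by chance, observed {term_hits}: "
      f"p = {p_value:.2e}")
```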
Dysregulated components of the canonical Wnt signaling pathway
Because a major mediator of canonical Wnt signaling is β-catenin, we used Western blotting methods to characterize the content and cellular localization of total and phosphorylated β-catenin in control (Lux-ND) and HPRT knockdown SH-SY5Y cells (sh2-ND) in the basal state and after 12 days of in vitro differentiation, as illustrated in Figure 1. The left panels represent cytosolic β-catenin while those on the right represent the nuclear fractions. In the basal pre-differentiation state, control cells and knockdown cells contain similar levels of cytosolic β-catenin, although the content in knockdown cells is reproducibly slightly reduced. A portion of the β-catenin in both cell types is phosphorylated, and the amount of phosphorylated protein is markedly increased in the knockdown cells, indicating an apparent increase in phosphorylation of β-catenin in knockdown cells.
After a 12-day period of retinoic acid differentiation, total cytosolic β-catenin in control cells is relatively unchanged, although the amount of the phosphorylated form is moderately reduced. In contrast, HPRT knockdown cells demonstrate a marked increase in both total and phosphorylated cytosolic β-catenin. Although differentiated control cells demonstrate a significant degree of nuclear translocation of unphosphorylated β-catenin, HPRT-knockdown cells are reproducibly and markedly deficient in nuclear translocation.
Effects of aberrant Wnt signaling on downstream transcription factors
The canonical Wnt pathway is one of the key signaling pathways regulating the expression of important transcription factors, such as Lmx1a, En1 and En2, that are necessary for DA neurogenesis. Because the DA deficit is a hallmark defect of HPRT deficiency, and because we and other laboratories have previously demonstrated that HPRT deficiency is accompanied by dysregulated expression of several of these important transcription factors [13,14], including Lmx1a, En1 and En2, we examined the expression of Lmx1a and En1 in basal and differentiated control and knockdown cells (Figure 2). In both the undifferentiated and the differentiated cells, expression of En1 and Lmx1a is markedly reduced, by more than 50% in the case of Lmx1a in differentiated cells. This result is consistent with down-regulated Wnt induction of expression of these transcription factors.
Aberrant PS-1 expression
The PANTHER analytic method identified the Alzheimer's disease/presenilin pathway as the most severely dysregulated pathway in the HPRT knockdown cells (Table 2). Membrane-bound PS-1 is known normally to undergo proteolytic cleavage into an N-terminal fragment (NTF) of ca. 30 kDa and a C-terminal fragment (CTF) of ca. 20 kDa, which then combine with other components in a gamma-secretase complex to carry out the normal cleavage of amyloid precursor protein (APP) to Aβ. In some forms of familial Alzheimer's disease caused by PS-1 mutations, APP cleavage is aberrant and produces increased levels of the toxic Aβ42 cleavage product, which accumulates as the amyloid deposits characteristic of Alzheimer's disease.
To further characterize the mechanism of dysregulated PS-1 expression in HPRT knockdown cells, we used Western blotting methods to examine PS-1 expression in control and knockdown SH-SY5Y cells before and after differentiation (Figure 3). Both before and after 12 days of differentiation, the amounts of full-length PS-1 and of the NTF protein are similar in the membrane fractions of control and HPRT knockdown cells. In control cells, most of the PS-1 protein exists in the form of the processed ~30 kDa NTF and 20-25 kDa CTF. However, the HPRT-deficient cells reveal a virtually complete absence of the 23 kDa form of the CTF. Since production of the PS-1 protein and its overall processing seem unimpaired, we interpret the loss of the 23 kDa CTF to indicate marked instability of that processed fragment of PS-1 in cells deficient in HPRT expression. Because the antibody to full-length PS-1 may not recognize the NTF efficiently, we repeated the immunoblotting studies with a second antibody specific for the NTF (see Methods). The results confirmed that expression of the NTF in undifferentiated HPRT-knockdown cells is indistinguishable from that in control cells but that, in differentiated cells, the amount of NTF is markedly reduced in both cell types, particularly in the HPRT knockdown cells, where the NTF is virtually undetectable (Figure 3).
In the present study, we have also attempted to characterize a potential physical interaction between β-catenin and presenilin-1. It is known that the CTF of PS-1 interacts with and stabilizes β-catenin, and a recent study has reported that gamma-secretase activity does not affect PS-mediated regulation of β-catenin [28]. To test the ability of PS-1 and β-catenin to interact and to participate in the formation of complexes that might shed light on their coordinate dysregulation by HPRT deficiency, we used co-immunoprecipitation methods to determine whether HPRT deficiency interferes with the ability of β-catenin and PS-1 to function coordinately in a complex. Figure 4 presents Western blots of proteins precipitated either by anti-β-catenin or by anti-PS-1. In both cases, we find that both proteins are immunoprecipitated by either antibody, indicating that they are part of a common complex through which they may influence their joint functions. The amounts of both proteins are moderately reduced after 12 days of differentiation, although the reduction is more pronounced in the HPRT-knockdown cells.
As further validation of the aberrations of PS-1 signaling in HPRT deficiency, we examined primary fibroblast cultures from two LND patients, one with no detectable residual HPRT activity (WM) and one with 2.5% residual enzyme activity (LW). Figure 5 illustrates the effect of HPRT deficiency on PS-1 expression in these cells compared with control normal human fibroblasts (WT). Most of the PS-1 in WT cells is in the full-length (FL) form, and very little PS-1 protein is present as either processed NTF or CTF. In contrast, both LND cell lines demonstrate markedly increased levels of processed NTF and CTF, indicating a markedly increased level of PS-1 proteolytic processing or proteolytic fragment stability compared with WT cells.
Discussion
The goal of this study was to identify mechanisms underlying the dysregulated neurogenesis and aberrant DA pathway development seen in HPRT deficiency. While our present results do not definitively identify mechanisms of neural disruption in LND, they do identify for the first time several signaling pathways that are markedly dysregulated in HPRT deficiency and that represent important new targets for studying the mechanisms of pathogenesis, and possibly even of therapy, for this disorder.
Because studies of aberrant gene expression in terminally differentiated cells of different genetic backgrounds often fail to clarify mechanisms that operate in less differentiated, developing systems such as the developing mammalian CNS, we elected first to identify gene expression aberrations in HPRT-knockdown human fibroblasts, then to validate them in HPRT-deficient human DA cells such as the SH-SY5Y neuroblastoma, and finally to validate candidate gene defects in cells derived from LND patients. Toward these ends, we have used qPCR and Western blotting methods to demonstrate that HPRT deficiency in differentiating human neuroblastoma cells does indeed dysregulate several vital CNS neuronal and developmental signaling pathways in which we had predicted dysregulated expression from GO and GSEA analysis of microarray-based global gene expression in control and HPRT-knockdown human fibroblasts. The extent to which these defects clarify mechanisms of pathogenesis in LND patients is still to be established, but these results demonstrate clearly, in fully differentiated normal and LND human fibroblasts and in dopaminergic SH-SY5Y neuroblastoma cells, that HPRT plays strong developmental roles in many functions and processes vital to CNS development, a valuable insight into the neuronal regulatory functions of HPRT whether or not it fully explains the disease phenotype. We conjecture that these robust dysregulations in cultured HPRT-deficient cells may have important neurologically deleterious effects in the human disease.
As we have confirmed by functional studies, the transcriptional characterization of human fibroblasts knocked down for HPRT expression through RNA interference correctly identified aberrations in a number of important signaling pathways in HPRT deficiency, especially those identified in gene ontology categories as the Wnt signaling and the Alzheimer's disease/presenilin pathways, which play roles in the pathogenesis of two common human neurodegenerative and neurodevelopmental diseases: Parkinson's and Alzheimer's diseases. Because we found these pathways to be convincingly dysregulated in HPRT-knockdown fibroblasts (Table 2), we extended the gene expression analysis to another cell type, the SH-SY5Y neuroblastoma cell line, which can be differentiated into neuronal cells in vitro. The DA SH-SY5Y neuroblastoma cells have served as valuable and productive model systems for characterizing mechanisms of neurogenesis in vitro and, in the present study, convincingly demonstrate major disturbances of expression in HPRT knockdown cells (Figures 1 and 3).
Our current study clearly demonstrates that knockdown of HPRT leads to coordinate dysregulation of several vital signaling pathways that play key roles in regulating aspects of neural pathway development and neurogenesis, and that are, for that reason, attractive candidates for playing an important role in the neurodevelopmental aberrations of LND. For these studies, we focused on two HPRT-regulated signaling pathways with clear implications for neural function: the canonical Wnt and the presenilin pathways. Many previous studies have revealed important roles of Wnt signaling in the generation of DA pathways and DA neurons. Of special interest is the known role of the Wnt/β-catenin signaling pathway in regulating hippocampal and midbrain DA neurogenesis [17][18][19][20][21][22]. It is also known that treatment of prenatal DA progenitors with inhibitors of the β-catenin kinase GSK3β enhances DA neurogenesis [29] and that transplantation of rodent fetal neural stem cells treated with Wnt5a results in enhanced survival, differentiation and functional integration of stem cell-derived DA neurons in an animal model of Parkinson's disease [22]. As is clearly shown in Figure 1, HPRT deficiency in SH-SY5Y cells impedes transport of cytosolic β-catenin to the nucleus, where it is required to carry out its gene regulatory roles.
To try to establish the direct relevance of these findings in SH-SY5Y cells, we have examined several primary fibroblast cultures from LND patients. In the case of β-catenin, we have so far been unable to identify consistent aberrations in β-catenin expression and nuclear transport, and we are pursuing those studies with larger numbers of control and LND cells. Nevertheless, we infer from these results that the down-regulated Wnt signaling pathway demonstrated in this study may contribute to defective generation and function of DA neurons, possibly by down-regulating downstream transcription factors such as Lmx1a and En1 (Figure 2) and others whose expression is necessary for effective generation of DA neurons [18][19][20][21][30].
In the case of aberrant PS-1 signaling in HPRT-deficient SH-SY5Y cells, we have found little change in the level of the full-length or NTF form of PS-1 in the HPRT-knockdown cells compared with wild type cells, but a virtual absence of the processed 23 kDa CTF in membrane fractions of HPRT knockdown SH-SY5Y cells, both before and after retinoic acid differentiation (Figure 3). In contrast to the very low levels of PS-1 processing seen in HPRT-positive control WT human fibroblasts, the fibroblasts from the two LND patients demonstrate markedly increased PS-1 processing to the NTF and CTF (Figure 5). These aberrant PS-1 processing steps in HPRT-deficient LND cells may provide useful clues to previously unsuspected purinergic contributions to the defective processing of APP found in familial Alzheimer's disease [25,26,31,33,34,35]. We interpret our results to suggest that aberrant PS-1 processing in HPRT deficiency may reveal mechanisms of neural damage additional to the Aβ42 accumulation of Alzheimer's disease. The NTF and CTF of PS-1 form a heterodimer that is normally incorporated into a gamma-secretase complex whose aberrant function in some forms of familial Alzheimer's disease is implicated in aberrant APP cleavage and over-production of toxic Aβ42 or an increased ratio of Aβ42 to Aβ40 [25]. It is the CTF of PS-1 that has been shown to play a key role in determining pathogenicity through Aβ42 accumulation, since Aβ42 accumulation does not occur in cells expressing only the mutated NTF [32]. Formation of the PS-1 heterodimer has also been reported to enhance its stability; in the absence of heterodimer formation, the pathogenic CTF is thought to be degraded and its pathogenicity reduced or lost. Furthermore, phosphorylation of the CTF of PS-1 [33], in particular of the 23 kDa C-terminal isoform, has been reported to decrease Aβ production [34]. Our PS-1 results suggest that a decrease or loss of HPRT expression, or conditions mimicking the altered purine levels and pool sizes of HPRT deficiency, may destabilize the PS-1 heterodimer and allow degradation of the CTF. We conjecture that modulations of purine pathways may eventually provide avenues for therapy or prevention of some neurodegenerative or neurodevelopmental diseases associated with aberrant presenilin-1 function and/or Aβ processing.
As a further test of the functional significance of the PS-1 defect in HPRT-knockdown cells, and because PS-1 is known to regulate Notch-1 signaling by cleaving Notch-1, we investigated the effect of HPRT knockdown on the level of the Notch intracellular cleavage domain (NICD) in HPRT-deficient SH-SY5Y cells. Our preliminary results indicate that while wild type and HPRT-deficient SH-SY5Y cells have similar levels of NICD in the undifferentiated state, retinoic acid-differentiated HPRT-deficient cells demonstrate a marked (2.4-fold) decrease in NICD (Kang et al., unpublished results). We therefore infer from these results that HPRT may participate in regulating both APP and Notch expression through its effect on PS-1 expression.
In the current case of HPRT effects on signaling pathways, either of these dysregulated pathways alone, Wnt signaling or PS-1 expression, might reasonably play a role in at least part of the HPRT deficiency neurological phenotype, through dysregulated expression of transcription factors or through mechanisms related to disturbed PS-1 function and gamma-secretase activity. Moreover, the individual roles of β-catenin and PS-1 in neurodegenerative disease, together with their coordinate dysregulation by knockdown of the housekeeping gene HPRT, suggest that aberrant Wnt and PS-1 pathways may interact and cooperate to produce pathogenic components that individually are subtle but that in the aggregate produce powerfully disruptive effects on vital neural pathways, such as those seen in human HPRT deficiency. It may therefore not be a surprise that the pathways regulating DA neuron development in HPRT deficiency overlap with pathogenic mechanisms found in severe neurodegenerative diseases such as Alzheimer's and Parkinson's diseases. For instance, defects in the generation and function of DA neurons are obviously part of the Parkinson's disease phenotype but have also been reported in Alzheimer's disease. Loss of dopamine D2 receptors has been reported to play both motor and cognitive roles in Alzheimer's disease [36], and L-dopa administration has been reported to produce a marked reversal of abnormalities of motor cortical excitability, such as decreased intra-cortical inhibition (ICI), in Alzheimer's disease patients [37].
Dysfunctional PS-1 has been reported to negatively regulate the stability of β-catenin, thereby regulating neuronal apoptosis or preventing neuronal differentiation [38,39]. This is consistent with our finding that both non-differentiated and differentiated HPRT-knockdown cells missing the 23 kDa CTF show lower levels of β-catenin than control cells, possibly because the β-catenin binding site on PS-1 is known to lie on the C-terminal portion of PS-1 (residues 322-450) [40]. The interaction between β-catenin and PS-1 may suggest coordinate regulation through the formation of multifunctional complexes of these two and possibly other components. Indeed, in the present study, we have confirmed the reported interaction between PS-1 and β-catenin, as demonstrated by co-immunoprecipitation with PS-1 and β-catenin antibodies (Figure 4). The role played by dysfunctional PS-1 in determining the stability and down-regulation of Wnt/β-catenin signaling, possibly through impaired complex formation between β-catenin and truncated PS-1, merits further study.
Overall, we interpret our results to indicate that the molecular and neurological phenotypes induced by HPRT deficiency include widespread pleiotropic aberrations in components of disparate major signaling pathways, especially dysregulated neurodevelopmental processes associated with the Wnt and PS-1 signaling pathways. The possible pathogenic role of individual components of the Wnt/β-catenin and presenilin functions had not previously been demonstrated experimentally to be associated with the development of LND. The present study may therefore point toward common pathways among several quite different neurodevelopmental and neurodegenerative diseases and suggest productive new areas of research for understanding the neurological defects in these and other neurodegenerative and neurodevelopmental diseases.
Cells and in vitro neuronal differentiation
Human dermal fibroblast-adult (HDF-a) cells were obtained from ScienCell Research Laboratories (Carlsbad, CA) and were grown to approximately 90% confluence in Dulbecco's modified Eagle's medium (DMEM) high-glucose medium supplemented with 10% (v/v) fetal bovine serum, 1% (v/v) penicillin/streptomycin (Invitrogen, Carlsbad, CA) and 0.2% (v/v) normocin (Invivogen, San Diego, CA) in a 5% CO2 atmosphere. Medium was renewed every 2-3 days. Human SH-SY5Y cells were obtained from ATCC and were maintained as described above for HDF-a cells. SH-SY5Y cells maintained in culture as described above were differentiated with retinoic acid as previously described [41]. Fibroblast cells from two LND patients, WM and LW, were provided by Dr. H.A. Jinnah, Department of Human Genetics and Pediatrics, Emory University.
HPRT-knockdown and control vectors
The small hairpin oligonucleotides for HPRT knockdown were selected as previously described [14]. The selected HPRT hairpin oligonucleotides for this study are as follows: the sequence of the sense element is 5′-ATCCGTGGCCATCTGCTTAGTAGATTCAAGAGATCTACTAAGCAGATGGCCATTTTTTG-3′ and the sequence of the antisense element is 5′-ATTTCAAAAAATGGCCATCTGCTTAGTAGATCTCTTGAATCTACTAAGCAGATGGCCACG-3′. In both sequences, the nine-base hairpin loop (TTCAAGAGA in the sense element) separates the sense and antisense target elements. The selected hairpin oligonucleotides were recombined with RNAi-Ready pSIREN Moloney leukemia virus-based retrovirus vectors that express shRNA from the human U6 promoter (Clontech Laboratories, Mountain View, CA), and the pSIREN vectors and a plasmid encoding the glycoprotein of vesicular stomatitis virus (pCMV-G) were co-transfected into the packaging cell line GP-293 [14]. Viral titer determination, production and isolation from the packaging cell line GP-293 were carried out in HT-1080 cells as previously described [14]. As a control shRNA, we used an RNAi-Ready pSIREN-RetroQ vector (Clontech Laboratories) expressing a shRNA targeted against luciferase.
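The quoted oligonucleotides follow the usual shRNA hairpin layout: a 19-nt sense target, a short loop, the reverse-complement antisense target, and a poly-T terminator between cloning ends. The sketch below reassembles the published sense strand from those parts as a consistency check; only the sequences quoted above are taken from the text, and the labeling of the parts is an inference from their structure.

```python
# Consistency check on the hairpin layout of the published sense oligo:
# cloning end + 19-nt sense target + loop + antisense target + poly-T end.

def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

target = "TGGCCATCTGCTTAGTAGA"  # 19-nt HPRT target from the sense oligo
loop   = "TTCAAGAGA"            # hairpin loop separating the two target arms

hairpin = "ATCCG" + target + loop + revcomp(target) + "TTTTTTG"
published = ("ATCCGTGGCCATCTGCTTAGTAGATTCAAGAGA"
             "TCTACTAAGCAGATGGCCATTTTTTG")
assert hairpin == published  # the reconstruction matches the quoted sequence
print(hairpin)
```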
Cell transduction and selection of HPRT-deficient cells
Control and knockdown cells were infected at a multiplicity of infection (MOI) of approximately 1 with the anti-luciferase or anti-HPRT retroviral vectors. Infected cells were grown for 10 days in complete DMEM medium containing 3 µg/ml of puromycin. Bulk cultures were re-plated and maintained in DMEM without puromycin selection for an additional 7 days, after which cells were examined for HPRT expression by transfer to DMEM containing 250 µM 6-thioguanine. The cells infected with the HPRT shRNA retrovirus vector grew and expanded normally in 6-thioguanine, whereas the control cells infected with the luciferase shRNA vector were unable to grow in 6-thioguanine (data not shown).
RNA purification and quantitative PCR analysis
Total RNA from HPRT-knockdown and control cells was purified with the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. To synthesize complementary DNA from purified RNA, 2 µg of total RNA was added to a mixture of TaqMan reverse transcription reagents (Applied Biosystems, Foster City, CA) consisting of 5.5 mM MgCl2, 500 µM dNTP mixture, 2.5 µM random hexamers, 0.4 U/ml RNase inhibitor and 1.25 U/ml reverse transcriptase in 50 µl of TaqMan RT buffer. Thermal cycling for reverse transcription comprised activation at 25°C for 10 minutes, reverse transcription at 48°C for 30 minutes and inactivation at 95°C for 5 minutes. For qPCR analysis, the reagents required for the PCR reaction were supplied by the Qiagen QuantiTect SYBR Green PCR Kit (Qiagen), and the PCR reaction was carried out using the Opticon2 System DNA Engine (BioRad, Hercules, CA). Primer sequences for these reactions are summarized in Table S2. For normalization of qPCR, primers specific for the housekeeping genes TATA box binding protein and glyceraldehyde-3-phosphate dehydrogenase were used as standardization controls.
Immunoblotting
For protein extraction, HPRT-knockdown and control cells were lysed with the Mammalian Cell Lysis Kit (Sigma-Aldrich, St. Louis, MO) according to the manufacturer's instructions. For subcellular fractionation of proteins, HPRT-knockdown and control SH-SY5Y cells and LND patient fibroblast cells were lysed and fractionated with the Qproteome Cell Compartment Kit (Qiagen) according to the manufacturer's instructions. Extracted proteins were quantified by the Bradford protein assay. Ten or twenty micrograms of protein per sample were separated by reducing Tricine-SDS-PAGE, and the separated protein bands were transferred onto polyvinylidene fluoride membrane (Millipore, Billerica, MA) using a Mini Trans-Blot Electrophoretic Transfer Cell (BioRad). Blotted polyvinylidene fluoride membranes were blocked with blocking solution containing 1% bovine serum albumin, 20 mM Tris, 137 mM NaCl and 0.1% Tween-20 (Sigma-Aldrich) for one hour at room temperature. Incubation with primary antibodies was carried out overnight in a 4°C chamber, and signal amplification using horseradish peroxidase-conjugated secondary antibody was performed at room temperature for one hour. SuperSignal West Pico (Thermo Scientific, Rockford, IL) was used as the chemiluminescence reagent, and the x-ray film was developed with an SRX101A film processor (Konica Minolta, Motosu, Gifu, Japan). The monoclonal antibody against HPRT (Santa Cruz Biotechnology, Santa Cruz, CA) was used for immunostaining, with goat anti-mouse IgG-HRP (Thermo Scientific) as the secondary antibody. Antibodies against β-catenin, phospho-β-catenin and the full-length and C-terminal fragment of presenilin-1 were obtained from Cell Signaling Technologies (Danvers, MA), and the antibody against the N-terminal fragment of presenilin-1 was purchased from Abcam (Cambridge, MA). Mouse monoclonal antibody to beta-actin (Abcam) and antibodies to GAPDH and Lamin A/C (Cell Signaling Technology) were used as loading controls. All primary and secondary antibodies were diluted in blocking solution according to the manufacturers' instructions. For co-immunoprecipitation, cells were lysed in 1X cell lysis buffer containing 20 mM Tris (pH 7.5), 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, 2.5 mM sodium pyrophosphate, 1 mM β-glycerophosphate, 1 mM Na3VO4 and 1X protease inhibitor cocktail (Roche, Basel, Switzerland). Before immunoprecipitation, 500 µl of lysate was pre-cleared by incubation with 50 µl of protein A agarose beads (50% bead slurry) at 4°C for 60 minutes. After pre-clearing, the 500 µl of cell lysate was incubated with anti-β-catenin or anti-presenilin-1 primary antibody overnight at 4°C. For precipitation of proteins coupled with primary antibody, 50 µl of protein A agarose beads (50% bead slurry) was added and incubated with gentle rocking for three hours at 4°C. The precipitate was washed five times with 500 µl of 1X cell lysis buffer, and the proteins coupled to the protein A agarose beads were eluted in reducing sample buffer for Western blotting.
HPRT enzymatic assay
HPRT assays were performed as described [42]. To measure HPRT activity, HPRT-knockdown and control cells were treated with lysis buffer (10 mM Tris-HCl, pH 7.4, and 12.5 mM MgCl2) and sonicated on ice six times for 10 seconds with 30-second intervals. Aliquots of 50 µg of extracted protein were added to the assay mixture consisting of 2 mM hypoxanthine, 2 mM adenine (Sigma-Aldrich), 0.12 mM [8-14C]hypoxanthine, 0.12 mM [8-14C]adenine (Moravek Biochemicals, Brea, CA), 0.24 mM AOPCP and 5 mM PRPP (Sigma-Aldrich), and the assay mixtures were incubated at 37°C for 2 hr. The enzyme reactions were stopped by adding trichloroacetic acid (Sigma-Aldrich) to a final concentration of 2.5%, and the supernatants were collected after centrifugation at 10,000 × g for 5 minutes. Aliquots of 3 µl of carrier consisting of 19.1 mM AMP and IMP (1:1 ratio; Sigma-Aldrich) were spotted onto TLC PEI Cellulose F plates (EMD Chemicals, Gibbstown, NJ), and 3 µl aliquots of the sample supernatants were applied to the plates. Ascending chromatography was carried out with 5% Na2HPO4 (Sigma-Aldrich) as the solvent, and the chromatogram was air-dried. Chromatograms were exposed to x-ray film for 24-48 hours at −70°C. Portions of the plates corresponding to the positions of IMP and AMP were scraped and transferred to scintillation vials, and radioactivity was measured using a Beckman LS 6500 liquid scintillation counter (Beckman Coulter, Brea, CA).
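The activity number reported from such an assay is simple arithmetic over the scintillation counts: product counts are converted to nanomoles via the substrate's specific radioactivity and normalized to protein input and incubation time. A hedged sketch, with every value a hypothetical placeholder:

```python
# Sketch of the specific-activity arithmetic behind the HPRT assay readout.
# Every number below is a hypothetical placeholder.

cpm_imp      = 12000.0  # counts in the IMP spot scraped from the TLC plate
cpm_per_nmol = 2000.0   # specific radioactivity of the labeled hypoxanthine
protein_mg   = 0.050    # 50 ug of protein per assay
hours        = 2.0      # incubation time at 37 C

nmol_imp = cpm_imp / cpm_per_nmol
activity = nmol_imp / (protein_mg * hours)  # nmol IMP formed / mg protein / hr
print(f"HPRT specific activity: {activity:.1f} nmol/mg/hr")
```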
Microarray and data analysis
All reported data are MIAME compliant, and the raw data have been deposited in a MIAME-compliant database, NCBI's Gene Expression Omnibus (Edgar et al., 2002); they are accessible through GEO Series accession number GSE24345 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE24345). For microarray analysis, RNAs from triplicate independent cultures of vector-infected HPRT knockdown and control fibroblasts were prepared separately, pooled and used to prepare cRNA, and finally subjected to microarray transcriptional analysis in triplicate. The integrity of total RNA from HPRT knockdown and control cells was assessed with a 2100 bioanalyzer (Agilent Technologies, Santa Clara, CA) [43] before microarray analysis. The RNA integrity number (RIN) [44] (maximal degradation = 1; maximal molecular integrity = 10) was 10 for both the normal and knockdown cells (data not shown), indicating that the isolated RNA samples were of sufficiently high quality for subsequent preparation of cDNA for microarray analysis. Microarray transcriptional analysis was performed in triplicate using the HumanWG-6 v3.0 Expression BeadChip system (Illumina, San Diego, CA). All reagents were obtained from the HumanWG-6 v3.0 Expression BeadChip Kit (Illumina), and all experimental steps were carried out according to the manufacturer's instructions (Illumina). After scanning of the hybridized BeadChips, quantitation of slide images was performed using Illumina's BeadArray software; the raw data were normalized by the Loess normalization method [45], and the normalized data were exported from BeadStudio to GeneSpring GX 10.0.2 (Agilent, Santa Clara, CA). For identification of genes significantly altered in knockdown cells compared with the control gene set, all detected entities were filtered by signal intensity value (upper cut-off 100th and lower cut-off 20th percentile) and by error (coefficient of variation, CV < 50.0 percent) to remove very-low-signal entities and to select entities with reproducible signal values among the replicated experiments, respectively. In the statistical analysis, an unpaired t-test (p < 0.05) was applied and all significant changes above 2-fold were selected. Signals were selected if they were above microarray background (detection p-value < 0.05) in either all six experiments or in at least three knockdown or control experiments. Analysis of GO, GSEA and signaling pathways [46][47][48] was carried out using GeneSpring GX 10.0.2 (Agilent) and the PANTHER Classification System (http://www.pantherdb.org/). In the analysis of signaling pathways using GeneSpring GX 10.0.2 (Agilent), a total of 140 cellular pathways were identified. For GSEA analysis, we used a false discovery rate (FDR) of < 0.4 and p ≤ 0.1.
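Put together, the filtering chain described above (intensity percentile cut-offs, CV filter, unpaired t-test, 2-fold change) can be expressed in a few lines. The sketch below mirrors those thresholds on simulated data; the matrix layout and the random values are assumptions, not the actual BeadChip intensities.

```python
# Hedged sketch of the probe-filtering chain: intensity cut-off, CV < 50%,
# unpaired t-test p < 0.05, and >2-fold change. Simulated data stand in for
# the real BeadChip intensities (GSE24345).
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.lognormal(5.0, 1.0, size=(1000, 6)),
                    columns=["ctrl1", "ctrl2", "ctrl3", "kd1", "kd2", "kd3"])
ctrl, kd = expr.iloc[:, :3], expr.iloc[:, 3:]

# 1) intensity filter: drop probes below the 20th percentile of mean signal
keep = expr.mean(axis=1) >= expr.mean(axis=1).quantile(0.20)
# 2) reproducibility filter: coefficient of variation < 50% in each group
keep &= (ctrl.std(axis=1) / ctrl.mean(axis=1)) < 0.5
keep &= (kd.std(axis=1) / kd.mean(axis=1)) < 0.5
# 3) unpaired t-test between groups, p < 0.05
pvals = ttest_ind(ctrl, kd, axis=1).pvalue
keep &= pd.Series(pvals, index=expr.index) < 0.05
# 4) fold change above 2 in either direction
fc = kd.mean(axis=1) / ctrl.mean(axis=1)
keep &= (fc >= 2.0) | (fc <= 0.5)

print(f"{int(keep.sum())} of {len(expr)} probes pass all filters")
```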
"Biology",
"Medicine"
] |
Creating an Entrepreneurial Culture Among Students Through Entrepreneurship Development Programmes (EDP) Natanya Meyer
Entrepreneurship is a cornerstone of economic growth and financial independence. This paper reports on the perceptions of students regarding entrepreneurship. Unemployment, especially among the youth, is a reality not only in South Africa but in many other countries globally. Creating a culture and awareness among students to become entrepreneurs is believed to be one solution for reducing unemployment. Various literature sources were examined during the compilation of a questionnaire comprising 53 questions on issues relating to entrepreneurship, unemployment and job possibilities. Approximately 400 students from the Faculty of Economic Sciences and IT at the Vaal campus of North West University in Gauteng province, South Africa, formed part of the survey, and 293 completed questionnaires were returned. The questionnaire was pre-tested and piloted before the main survey. Data were analysed using one-tailed z-tests, and the results indicate that the implementation of entrepreneurship development programmes (EDPs) for students would be well supported. The students have a realistic view of the risk of unemployment and show a favourable attitude towards entrepreneurship, which is seen as a possible safeguard against unemployment. It is believed that introducing EDPs to students as early as their first year of study will create a culture of self-employment. This could, in turn, improve economic development and help reduce youth unemployment in developing countries.
Introduction
An old philosophy regarding entrepreneurship states that entrepreneurs are born, not created. This statement could be partially correct; however, with proper training and the creation of an entrepreneurial culture, entrepreneurship development programmes (EDPs) can create and train entrepreneurs. Entrepreneurship education does not only build and shape better and stronger entrepreneurs; it also develops attitudes, creates values and motivation, and helps students and others participating in these programmes to acquire skills they can use in any job or task in which they might be involved (Bhat & Khan, 2014).
Job creation has been a global challenge for many years, including in South Africa (Meyer, 2014a). Unemployment rates are high in countries such as Spain (25%), Greece (24%) and Lesotho (25%), and relatively high in countries such as Italy (11%), Portugal (16%) and Ireland (15%) (CIA, 2013). A growing global problem is that of youth unemployment, which affects those between the ages of 15 and 24 in particular. Compared to national unemployment rates, the figures are quite shocking. According to the 2012 World Fact Book, Spain has a youth unemployment rate of 46 percent, Greece 44 percent, Portugal 30 percent, Ireland 29.4 percent, and Italy 29.1 percent, rising to 35 percent in 2012 (CIA, 2012). Unemployment is part of the reality of many South Africans, with the official unemployment rate of the country at 25.2 percent and a youth unemployment rate of more than 50 percent (StatsSA, 2013a). Globally, the average unemployment rate is approximately 9.2 percent (CIA, 2013), ranking South Africa's rate 169th out of 202 countries. This looming reality creates fear among young graduates, as they too could become part of these statistics. In light of this alarming situation, it should be part of the aim of any university to empower graduates by fostering entrepreneurship through education.
There are many causes of youth unemployment, but some of the main ones are rural-urban migration, the large share of the population that the youth constitute, low education levels in rural areas, poverty, corruption, strict labour laws, economic conditions and the influx of migrants.
Rural-urban migration can be explained by means of push-pull factors. Many young people move from rural areas into more formal and urbanised areas looking for better living conditions, resulting in a huge influx and, in many cases, an oversupply of labour in these urbanised areas (Stutz & Warf, 2012).
In particular, the people who fall within the youth category of 15 to 24 constitute a large portion of the population. In the mid-year report issued by Statistics South Africa (StatsSA, 2011), this category comprised 10 075 823 people out of a total South African population of 50 586 757, equivalent to 20 percent of the country's population. During 2013, this figure rose slightly to 10 203 329 people between the ages of 15 and 24, equivalent to 19 percent of the population during that year, as the total population grew by approximately 2 500 000 people (StatsSA, 2013b).

Low education levels in rural areas can be considered another factor, as many rural schools lack proper teachers and resources. This adds pressure on students who, in many cases, fall behind. These students will seldom have the skills and knowledge to become an entrepreneurial business owner. In a recent study by the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ), it was found that South Africa achieved a low ranking of 10th out of 15 African countries regarding reading, mathematics and health (Wilkinson, 2014).
Poverty is still a global problem, and its reduction is one of the main priorities of the Millennium Development Goals (MDGs) as declared by the United Nations in 2000 (UN, 2003). A total of 2.6 billion people, or approximately 40 percent of the global population, live below the global poverty line of US$2.00/day, and most of these people lack services and basic needs (Meyer, 2014b). Being poor and lacking basic needs and services can have a definite impact on employment rates, as it affects the availability of the resources needed to find a job or start a business. Poverty also influences education levels negatively. Taylor, Van der Berg and Burger (2014) point out that socio-economic factors such as poverty have a negative impact on education, which in turn can lead to the loss of employment and income.
Corruption can also be blamed as a possible cause of unemployment. In countries such as Nigeria and South Africa, where corruption levels are very high, it is believed that money aimed at facilitating entrepreneurship and other small business development is stolen from these projects, affecting their outcomes negatively (Uddin & Uddin, 2013; Odusegun, 2014).
The International Monetary Fund (IMF) has criticised South Africa's strict labour regulations, claiming that these laws intensify unemployment and hamper economic growth (Anderson, 2005). When strong regulation and strict labour laws are in place within a country, companies are likely to employ a minimum number of workers. There are various reasons for this. One is the implementation of minimum wages: many unemployed people would be prepared to work for a lower wage but are prohibited from doing so by labour laws. Another is the tedious dismissal procedure: if a company wishes to dismiss an employee, the process is so time-consuming and costly that the company would rather not appoint someone to the position in the first place.
Another cause of unemployment is economic conditions. Bad economic conditions lead to small businesses closing down and larger companies hiring few, if any, new staff members. The influx of migrant workers from neighbouring countries also has an adverse effect on unemployment rates. These migrant workers are often willing to work for a much lower wage than the citizens of the country, which could mean that companies prefer employing migrants rather than the currently unemployed citizens.
Naude, Gries, Wood and Meintjies (2008) highlight the importance of entrepreneurs as a channel for the spillovers associated with agglomerations and indicate that entrepreneurship is an important determinant of regional economic performance. Entrepreneurship can affect economic performance at three different levels, namely the individual, firm and societal levels (Wennekers, Uhlaner & Thurik, 2002). On an individual level, entrepreneurs affect their own economic performance as they earn an income, which they use to support themselves and their families. On a firm level, entrepreneurial businesses pay taxes and spend money on production and operating costs. On a societal level, entrepreneurs employ people who would otherwise have been unemployed. These employees earn a salary to spend, and pay taxes, again impacting positively on the economic performance of a country or region.
The role of entrepreneurship as a contributing factor to economic development has been highlighted many times in the literature (Awashti & Sebastian, 1996; Athayde, 2012). Entrepreneurship should be emphasised even more in developing countries such as South Africa, where unemployment is an ever-increasing problem. Entrepreneurship is also seen as part of a knowledge economy, meaning that entrepreneurship education can be extremely beneficial for the creation of new knowledge-based, graduate-led ventures (Kirby & Humayun, 2013).
Entrepreneurship offers opportunities for people to achieve financial freedom and independence (Raguž & Matić, 2011). The role of entrepreneurship in local communities and societies has increased in recent years, and there is a need to include entrepreneurship education as a topic within universities (Venesaar, Ling & Voolaid, 2011). According to Ekpoh and Edet (2011), there is a positive link between entrepreneurship education and entrepreneurial attitudes, and entrepreneurship education can improve the understanding and awareness of entrepreneurial abilities among students (Bagheri & Pihie, 2011). Therefore, incorporating entrepreneurial educational courses into university student programmes could have positive impacts (Tkachev & Kolvereid, 1999). This can be achieved by introducing EDPs, also
known as business promotion programmes or young enterprise programmes (this type of programme may have different names in different countries or institutions), as early as the first year of a student's degree. An EDP can be defined as a learning programme whose main objectives are the development and enhancement of an entrepreneurial culture or mindset and the training of enterprise creators (Allan Grey Orbis Foundation, 2013; Chowdhary & Prakash, 2010).
The concept of an EDP is not new and was experimented with in the late sixties. These early EDPs aimed at training selected entrepreneur candidates in how to set up a small business, how to manage it, and how to make a profit. They were established in Gujarat, a state on the north-west coast of India, as a joint initiative by the Gujarat Investment Corporation and other state agencies (Awashti & Sebastian, 1996). Other successful EDPs, or business promotion programmes, were initiated in Germany, France and Great Britain. The governments of the respective countries instituted the programmes in order to reduce unemployment rates and simultaneously increase self-employment. However, they were aimed not at university students but at the unemployed population (Remeikienė & Startienė, 2013).
The benefits of a successful EDP far outweigh the challenges. Some of the benefits include 1) positively affecting the employment rate within developing countries, 2) reducing the gender, age-group and wage differences in some labour markets, 3) increasing human capital in the form of knowledge, qualifications, education, working abilities and experience, and 4) establishing useful business contacts, including with other successful entrepreneurs (Remeikienė & Startienė, 2013).
Although Awashti and Sebastian (1996) are of the opinion that the benefits of an EDP will outweigh the challenges, various problems pertaining to EDPs still exist. Some of these challenges include 1) selection of the right candidates for the programme, 2) lack of seriousness and commitment from the organisations conducting the EDP, 3) lack of motivation provided by the training authorities to the candidates participating in the programme, 4) a non-conducive environment, 5) the non-participative attitudes of supporting role players such as established self-employed business owners and larger corporations (Chowdhary & Prakash, 2010), 6) monitoring of the EDP to ascertain its success rate, 7) following up on the success rate of the EDP after completion (Awashti & Sebastian, 1996), and 8) funding of the EDP.
An EDP should have very clear objectives from the start, creating measurable expectations for the presenter or promoter of the programme as well as for the trainee. Not all EDPs will have the same objectives; however, certain goals should form the basis of a well-planned EDP. These include 1) increasing the supply of entrepreneurs who start new businesses, 2) reducing the rate of unemployment by creating self-employment as well as employment for others, 3) promoting first-generation entrepreneurs by diversifying the base of enterprise ownership, 4) improving the quality of entrepreneurs (Awashti & Sebastian, 1996), 5) stimulating and identifying entrepreneurial talent, skills and drive, and 6) fostering attitudes towards change (Garavan & O'Cinneide, 1994). Support from the university or institution for the students who have completed the EDP is also very important. This can take the form of incubation programmes assisting these students with their start-up ventures.
The aim of this study was to determine students' perceptions of unemployment, their motivation to become self-employed, and their attitude towards the possibility of starting an entrepreneurship development programme on the campus as an addition to the degree or course for which they have enrolled.
Sample
The sample for this study comprised first-, second-, third-year and postgraduate students from a traditional university, namely the Vaal campus of the North West University, situated in Gauteng province, South Africa. This campus has already created a favourable attitude towards entrepreneurship awareness among students and local businesses with the introduction of the Enterprise Development Centre (EDC) during 2011; the possibility of offering students EDP options is currently being explored further.
Sampling method
A non-probability convenience sample of 400 students was drawn from the campus, limited to students from the Faculty of Economic Sciences and IT, as the study aimed at determining their specific attitudes towards the implementation of an EDP. Lecturers in the faculty were requested to distribute the questionnaire in their classes. All participants completed the questionnaire voluntarily, and no incentives were provided to encourage participation. Full confidentiality was assured to the participants, as no names or contact details were disclosed.
Research instrument
A descriptive research design, using a self-administered survey questionnaire, was employed for the study. Following a review of the literature, items were generated to determine students' perceptions of finding a job, entrepreneurship and unemployment. The questionnaire included a section requesting demographic information. The questionnaire was pre-tested using the debriefing approach on three students to ascertain whether all items were understood in the manner intended (Synodinos, 2003). Thereafter, it was pilot-tested on a sample of 35 students registered at the same university. These students did not form part of the final study. The 53 scaled responses were measured using a six-point Likert scale, ranging from strongly disagree (1) to strongly agree (6). The Likert scale was selected due to its popularity and because it is one of the most commonly used non-comparative scaling techniques (Bertram, 2006).
Results and Discussion
During data collection, 400 questionnaires were distributed over a timeframe of a week. Of the 400 questionnaires distributed, 293 completed questionnaires were returned, which equates to a 73.25 percent response rate. Respondents were mainly female (56.2%) and predominantly between the ages of 19 and 21. Even though most of the respondents indicated that they were from Gauteng (69.6%), all nine provinces were represented. The sample comprised predominantly African/black students (55.3%) and white students (39.2%). The majority of the participants were in their second (36.6%) and first year (34.6%) of study. A few other important questions were asked in this section, covering the type of living environment whilst growing up, parent self-employment, and awareness of EDPs. Of the respondents, 35.2 percent indicated that at least one of their parents was self-employed. Concerning living environment, 18.2 percent indicated that they grew up in a rural or informal settlement, and 81.8 percent indicated that they grew up in an urban or formal area. Interestingly, 28.5 percent reported that they had never heard of an EDP (Table 1). The paper reports on 12 mutually exclusive items that formed part of the wider study. Table 2 presents the descriptive statistics computed from the collected data. As is evident from Table 2, means above three (indicating agreement) were computed for all of the items. With the current youth unemployment rate in South Africa at around 50 percent, it is not surprising that the two highest means reported from the survey related to unemployment. The students' general perception was that improving entrepreneurship could decrease the unemployment rate. Students generally would have liked more practical assistance, additional workshops and training regarding entrepreneurship, as well as the opportunity to participate in an EDP while completing their degree.
In order to determine whether these means are statistically significant, a one-tailed z-test was performed, with the expected mean set at > 3 and the significance level at α = 0.05. Table 3 reports the calculated z-values and p-values. As can be seen from Table 3, the computed mean responses to each of the items are statistically significant (p < 0.05). This infers that the respondents realise that they might face unemployment or not find a job one day (p = 0.000 < 0.05), would like to become self-employed and form part of an EDP (p = 0.000 < 0.05), believe that unemployment is a problem in South Africa (p = 0.000 < 0.05), and would have liked more practical experience on how to become entrepreneurs during the course of their degree (p = 0.000 < 0.05). These findings also indicate that students have a positive attitude towards entrepreneurship (p = 0.000 < 0.05).
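For clarity, the test applied to each item is the standard one-sample z statistic against the neutral value of 3, as sketched below. The item mean and standard deviation are hypothetical stand-ins for the values reported in Table 3.

```python
# Minimal sketch of the one-tailed z-test applied to each Likert item.
# The item mean and standard deviation are hypothetical placeholders.
import math
from scipy.stats import norm

n    = 293   # completed questionnaires
mean = 4.8   # hypothetical item mean on the six-point scale
sd   = 1.2   # hypothetical item standard deviation
mu0  = 3.0   # null value: the expected mean under no agreement

z = (mean - mu0) / (sd / math.sqrt(n))
p = norm.sf(z)  # upper-tail p-value
print(f"z = {z:.2f}, p = {p:.3g}")  # p < 0.05 -> mean significantly above 3
```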
In theory, entrepreneurship programmes should improve the output of sustainable and successful business owners; however, the success of these programmes is difficult to measure. Another possible problem is the attitude of students towards these programmes when they are introduced into an existing university degree or programme. The findings of this study suggest that implementing an EDP for students registered at the University's Faculty of Economic Sciences and IT would be well supported. The evidence in the sample indicates that students have a realistic view of the risk of unemployment and a favourable attitude towards entrepreneurship in general, as well as towards entrepreneurship as a possible safeguard against unemployment. In order to be of value, such a programme should focus on providing practical entrepreneurial skills that will equip graduates to compete effectively in day-to-day business. As indicated by Ligthelm (2010), support should also be provided for start-ups after they have established a business, as many of them will not remain sustainable without assistance during the first few years of their existence. This can be done through advice centres within the incubators situated on the university campus where the initial EDPs were presented. Entrepreneurship development should be a fundamental topic raised not only at government level, but also at primary, secondary and tertiary level. Fostering and creating a culture of entrepreneurship in the early stages of development and learning will contribute to the successful creation of self-employed ventures, which in turn could reduce the unemployment and poverty rate to an extent.
A study conducted in Nigeria also indicated that youth unemployment is a huge problem and that every year thousands of graduates leave universities and remain unemployed for long periods of time. In their study, the authors show how many of these youths then turn to cybercrime and other forms of intelligent crime. Had they received entrepreneurial training or assistance, they could have converted their knowledge into a feasible IT business instead of a criminal act (Uddin & Uddin, 2013). They further state that all levels of government should strive to encourage the youth to think rationally about job creation. Another study conducted in Nigeria found that unemployment is negatively related to entrepreneurship development: the higher the unemployment rate, the lower the level of entrepreneurial development within that country. Thus, increasing entrepreneurial support and activities could decrease the unemployment rate (Oladele, Akeke & Oladunjoye, 2011). Solomon, Duffy & Tarabishy (2002) also noted these findings.
Recommendation and Conclusion
Unemployment is a worldwide problem, but with proper strategies and planning, including the promotion of entrepreneurship development training, it could possibly be reduced. A major problem exists with youth unemployment in many countries. Therefore, it is of the utmost importance to implement any strategy or programme that may provide a foundation for young graduates to utilise the information they were taught during the completion of their degrees, by means of their own venture start-up, in the likely event of them not finding employment. The following is recommended:
• Government should create an enabling environment for entrepreneurs. This can be done by implementing EDPs and similar programmes as a compulsory module or course at primary, secondary and university level.
• Local businesses should become involved with these projects, especially at university level, where they can assist in training and practical workshops.
• Internship programmes should be created by larger businesses, which can then appoint graduates from these programmes.
• Rules and regulations regarding the process of starting a business should be relaxed, as strict laws and time-consuming processes make the task of starting a business very difficult.
• Government needs to implement strategies that promote small business start-ups, especially for the youth section of the population.
• Mentorship programmes need to be created by business and other institutions.
• Business incubators need to be more active in their creation of entrepreneurs and in assisting them with their possible start-ups.
• Support to new businesses should be continued for a period after start-up, as many businesses fail within their first years of operation.
• Technical and practical skills also need to be developed within these programmes.
Implementing all or some of these recommendations within South Africa could have a huge impact on the current youth unemployment rate. Based on the outcome of this study and the positive attitude of the students towards entrepreneurship and self-employment, it is believed that a well-managed and well-designed EDP would be successful and accepted among students registered at the Faculty of Economic Sciences and IT at the North West University (Vaal Campus).
Table 1: Sample description
Table 3: Student perceptions towards entrepreneurship

Traditional EDPs are normally presented as a short course or programme to individuals entering the programme. As seen in the EDPs implemented in India, Germany, France and Great Britain, these programmes were mostly initiated by the government or by private institutions. Similar programmes presented in South Africa include the Community Entrepreneurship Development Programme (CEDP), managed by the Crossover Transformation Group (2013). Its main objective is to provide mentorship and assistance regarding business development to people living in previously disadvantaged areas. The Western Cape government has three initiatives linked to the National Development Plan: 1) the Enterprise Dynamics Programme for grade 1 to 12 students, 2) the Mini-Enterprise Programme for senior secondary school learners, and 3) the Business Establishment and Sustainability Programme (BESP), aimed at providing training for people with no or limited formal education. Private institutions offer other forms of training programmes, such as internships. As can be seen from the listed programmes, they are not aimed at providing additional and practical training for graduates during the completion of their degrees.
"Business",
"Education",
"Economics"
] |
The quadrate-metapterygoid fenestra of otophysan fishes, its development and homology
We compare the ontogeny of the hyopalatine arch in representatives of the Otophysi to shed light on the homology of the so-called quadrate-metapterygoid fenestra (QMF). Described initially as a character of characiforms (tetras and allies), the presence of a QMF has also been reported for cobitid loaches and a handful of cyprinids among the cypriniforms, as well as for a few clupeoids. In characiforms, the QMF is either already present as an opening in the palatoquadrate cartilage in the earliest developmental stages we studied
Introduction
The hyopalatine arch comprises the hyomandibular, symplectic, quadrate, metapterygoid, ecto- and endopterygoid, and palatine (dermo- and/or autopalatine), and has been modified in many different, sometimes bizarre, ways during the evolution of the Teleostei. Modifications include extreme changes in the size and shape of bones, or their loss, which have sometimes made it difficult to assess the homology of the components. One prominent evolutionary change in the structure of the hyopalatine arch is the formation of a conspicuous opening between two bones, the so-called quadrate-metapterygoid fenestra (QMF from here on). The presence of this unusual opening was first illustrated (Fig. 1) in Serrasalmus rhombeus (Linnaeus) by Rosenthal (1816), and described by Sagemehl (1885) for representatives of the South American characiform families Lebiasinidae, Characidae and Erythrinidae, and the African Alestidae, Hepsetidae, and Citharinidae, and illustrated in detail for Erythrinus, Hydrocynus and Citharinus. The presence of a QMF has since been confirmed for a large number of African as well as New World characiforms (e.g., Weitzman 1962, 1964; Daget 1962, 1963; Roberts 1966, 1969, 1974; Winterbottom 1980; Brewster 1986; Vari 1989, 1995; Zanata and Vari 2005; Mattox and Toledo-Piza 2012).
Few data have been published on how the fenestra develops in the different groups; so far, ontogenetic information has been available only for hepsetid and characid characiforms (Bertmar 1959; Vandewalle et al. 2005; Walter 2013; Mattox et al. 2014; Marinho 2022) and opsariichthyine cypriniforms (Arratia 1992).
Access to several developmental series of key taxa (different Ostariophysi and outgroups) has enabled us to revisit the issue of QMF homology and describe its development in each taxon in detail.
Specimen preparation and examination
Specimens studied and illustrated here were cleared and double stained (c&s) following Taylor and Van Dyke (1985). Specimens were dissected with fine-tipped scissors and tungsten needles. Dissected hyopalatine arches were then photographed under a Zeiss Discovery V20 at different depths of focus. Deep-focus images were then produced in Helicon Focus 7 or Zeiss Zen software. Images were cleaned of dust specks in Adobe Photoshop 2021, and species ontogeny figures were assembled in Adobe Illustrator 2021. Lengths are given as notochord length (NL) or standard length (SL). The illustrated material is deposited in the following collections: Biodiversity Research and Teaching Collections, Texas A&M University, College Station (TCWC); Senckenberg Naturhistorische Sammlungen Dresden (MTD).
The hyopalatine arch in Elops, Chanos and Denticeps as phylogenetically primitive representatives of teleosts, ostariophysans and clupeomorphs
We begin our comparison with juvenile stages of key taxa that illustrate the primitive condition for our groups of interest. In the 33.8 mm SL specimen of the elopomorph Elops (Fig. 2A), the palatoquadrate cartilage is still quite extensive; it spans from the pars autopalatina to the pars metapterygoidea and separates the quadrate from the metapterygoid. The 18.7 mm SL clupeomorph Denticeps (Fig. 2B) and the 26.5 mm SL gonorynchiform Chanos (Fig. 2C) show a similar association of quadrate and metapterygoid to that in Elops. As in Elops, large parts of the palatoquadrate cartilages in these two taxa are still present, with the quadrate and metapterygoid still widely separated by cartilage in all three taxa. No opening exists in the cartilage between the quadrate and metapterygoid. In subsequent development, not shown here, the ossifications of the quadrate and metapterygoid eventually replace the cartilage between the two bones, which then abut each other in a straight suture (e.g., Ridewood 1904 for Elops and Chanos, and Greenwood 1968 for Denticeps).
Candidia barbata
We illustrate four stages: 8.0 mm SL, 12.3 mm SL, 24.7 mm SL, and 79.3 mm SL. Here and in the following accounts of the ontogeny, we restrict our description to the parts of the hyopalatine arch that are important for formation of the QMF. In our earliest stage (Fig. 3A), the hyosymplectic and palatoquadrate cartilages are well developed and arranged so that there is only a narrow gap between the pars metapterygoidea and the symplectic process. Ossification of the metapterygoid and symplectic has not started, but the quadrate is present and carries a long posteroventral process (sensu Arratia and Schultze 1991) of membrane bone. At 12.3 mm (Fig. 3B), the autopalatine, metapterygoid and symplectic are present as perichondral ossifications that cover the respective cartilages. The symplectic has moved away from the palatoquadrate so that a slightly wider gap has been established. At 24.7 mm (Fig. 3C), larger areas of the palatoquadrate and hyosymplectic cartilages ossify, so that in the adult of 79.3 mm (Fig. 3D) only narrow strips of cartilage remain between the bones. The main development in the context of formation of the opening previously referred to as a QMF is the further spatial separation between the symplectic and the quadrate/metapterygoid. The fenestra in Candidia is thus not a true QMF but only a gap that forms between the symplectic and the quadrate and metapterygoid of the palatoquadrate. This quadrate-metapterygoid gap, or QMG, is especially prominent in Opsariichthys, the taxon from which it was initially reported in cyprinids (Fig. 4).
Cobitis dalmatina
We illustrate five developmental stages: 8.6 mm SL, 11.0 mm SL, 13.0 mm SL, 20.3 mm SL, and 22.5 mm SL (Fig. 5). In our smallest specimen of 8.6 mm SL (Fig. 5A), the hyopalatine arch still has large areas of cartilage, and the autopalatine, quadrate and metapterygoid are only weakly ossified as perichondral lamellae. The cartilage between the quadrate and metapterygoid is still continuous. At 11.0 mm (Fig. 5B), the strip of cartilage between the metapterygoid and quadrate is still present but less prominently stained. The metapterygoid has acquired a small process of membrane bone at its posterodorsal corner. At 13.0 mm SL (Fig. 5C), the quadrate and metapterygoid cover larger areas of the cartilage, with the posterodorsal membrane bone process of the metapterygoid now reaching an anterior membrane process of the hyomandibular. At 20.3 mm SL (Fig. 5D), the most significant change is the beginning of the formation of the opening between the metapterygoid and quadrate. The cartilage between the two bones has been resorbed in the middle of their contact zone and does not stain with Alcian blue in the more peripheral areas of this zone. The bone matrix in the metapterygoid and quadrate adjacent to this middle zone shows signs of resorption, which has led to the formation of irregular openings, larger in the metapterygoid. In our largest stage of 22.5 mm SL (Fig. 5E), the opening between the metapterygoid and quadrate is fully developed and represented by a circular gap in the middle of these two bones from which both cartilage and bone are fully resorbed.
Ctenolucius hujeta
We illustrate five stages: 4.3 mm NL, 7.5 mm SL, 9.6 mm SL, 15.9 mm SL, and 38.3 mm SL (Fig. 6). The QMF is already developed in the palatoquadrate cartilage of the smallest stage we studied (Fig. 6A). It is an oval fenestra in the cartilage between the well-chondrified, thick pars quadrata and the thinner, more delicate pars metapterygoidea. At 7.5 mm SL (Fig. 6B), the quadrate has ossified around the anteroventral corner of the fenestra and the metapterygoid has started to form at its posterodorsal corner. During further development, the relative size of the fenestra increases, and as the cartilage ossifies further it acquires a round, rather than an oval, shape. At 9.6 mm SL (Fig. 6C), the metapterygoid and quadrate cover larger areas of the cartilage perichondrally, contributing to the anteroventral and posterodorsal corners of the slightly more elongate QMF. At 15.9 mm SL (Fig. 6D), both bones have started to replace the cartilage of the dorsal and ventral arms around the QMF. The quadrate has developed a triangular flange of membrane bone that fills in the anteroventral corner of the QMF. At 38.3 mm SL (Fig. 6E), a similar flange of membrane bone is present on the metapterygoid, filling in the posterodorsal corner of the QMF. Both the metapterygoid and quadrate are now more extensive and leave only small areas of cartilage between them. The quadrate forms the bony ventral arm, and the metapterygoid the bony dorsal arm, around the QMF.
Thus the QMF is mostly surrounded by bone, except at its anterodorsal and posteroventral corners, closely resembling the adult condition illustrated by Roberts (1969).
Lebiasina bimaculata
We illustrate four stages: 6 mm NL, 10.0 mm SL, 14.7 mm SL, and 20.6 mm SL (Fig. 7). In the smallest stage at 6 mm NL (Fig. 7A), the palatoquadrate cartilage is still complete from the pars quadrata and pars metapterygoidea to the pars autopalatina. At 10 mm SL (Fig. 7B), an elongate oval QMF has formed in the cartilage, and the quadrate and metapterygoid have begun to ossify as perichondral lamellae of bone, the latter covering the dorsal arm of the cartilage surrounding the QMF. At 14.7 mm SL (Fig. 7C), both bones are well ossified. The quadrate does not extend along the ventral arm of the cartilage that makes up the ventral margin of the QMF and stops at its base. The metapterygoid, which surrounds almost the entire length of the dorsal arm, has formed a ventral flange of membrane bone. This flange covers the dorsal half of the fenestra, reducing the opening to a narrow gap between it and the remnants of the palatoquadrate cartilage above the anterior tip of the symplectic. At 20.6 mm SL (Fig. 7D), the metapterygoid membrane bone flange has covered the QMF almost completely, resulting in an even narrower gap between the quadrate, metapterygoid and symplectic.
Development of the hyopalatine arch in Pyrrhulina spilota
We illustrate four stages: 5.1 mm NL, 8.8 mm SL, 14.1 mm SL, and 33.8 mm SL (Fig. 8). In our smallest stage (Fig. 8A), there is no sign of a QMF. The palatoquadrate cartilage resembles that of the smallest stage of Lebiasina in that it is a complete, continuous cartilage from the pars autopalatina anteriorly to the pars metapterygoidea posteriorly. At 8.8 mm SL (Fig. 8B), the quadrate and metapterygoid have started to ossify perichondrally at the articular head of the jaw articulation and around the posterior process of the pars metapterygoidea, respectively, still leaving the posteriormost tip of the latter cartilaginous and unossified. At 14.1 mm SL (Fig. 8C), the quadrate covers a larger area of the pars quadrata, and the metapterygoid, also larger, has formed a ventral flange of membrane bone that extends into the narrow space between the metapterygoid and the symplectic. In the largest stage examined (Fig. 8D), the quadrate and metapterygoid have grown considerably and now leave only a narrow strip of cartilage between them. The metapterygoid has acquired a dorsally directed flange of membrane bone, and its ventral flange now closely approaches a similar dorsal flange from the symplectic, separated from it by only a very narrow gap. Unlike in Lebiasina, a QMF never forms during the development of Pyrrhulina based on our material.
Alestopetersius smykalai
We illustrate five stages: 6.0 mm NL, 9.0 mm SL, 12.3 mm SL, 15.6 mm SL (Fig. 9) and 52.1 mm SL (Fig. 10). In our smallest specimen of this African characiform (Fig. 9A), there is a narrow, elongate QMF in the otherwise complete palatoquadrate cartilage. At 9 mm SL (Fig. 9B), the area between the pars quadrata and pars metapterygoidea has lengthened so that the QMF is now even more elongate. The quadrate has now formed, too. At 12.3 mm SL (Fig. 9C), the quadrate is well developed. The metapterygoid is also present as a perichondral ossification around the cartilage of the dorsal arm of the QMF, almost extending to the posterodorsal corner of the fenestra. The ventral arm of cartilage that forms the ventral border of the QMF shows some signs of resorption at the posteroventral corner of the QMF where it meets the dorsal arm. Finally, at 15.6 mm SL (Fig. 9D), further resorption of cartilage has almost disconnected the ventral and dorsal arms of the QMF. The metapterygoid has also formed a narrow ventral extension of membrane bone, which has narrowed the rather elongate QMF to a narrow gap between it and the cartilage of the ventral arm. In the 52.1 mm SL adult (Fig. 10A), the QMF is well developed and elongate, and its posterior and dorsal borders are formed by the metapterygoid. The quadrate contributes the anterior border of the QMF and the anterior half of its ventral border. A cartilage remnant of the pars quadrata forms another quarter of its ventral border, with the posteriormost quarter of the QMF border provided by the symplectic. This results in a wide spatial separation of the quadrate and metapterygoid along the ventral margin of the QMF. The adult 43 mm SL Rhabdalestes septentrionalis, another alestid species, shows a similar condition (Fig. 10B).
Megalechis thoracata
We illustrate four stages: 5.3 mm SL, 6.5 mm SL, 9.5 mm SL, and 22.0 mm SL (Fig. 11). In the earliest stage (Fig. 11A), the hyomandibulo-palatoquadrate cartilage, as is typical in siluriforms, consists of the autogenous pars autopalatina and the confluent pars hyomandibularis and pars quadrata, from which the pars metapterygoidea projects as a short and tiny anterodorsally directed cartilage process. At 6.5 mm SL (Fig. 11B), the pars metapterygoidea has enlarged, grown anteriorly towards the pars autopalatina and developed a wide anterior tip. The quadrate has started to ossify around the articulation with the lower jaw. At 9.5 mm SL (Fig. 11C), the quadrate covers most of the cartilage of the pars quadrata perichondrally. The metapterygoid has ossified perichondrally around the pars metapterygoidea and has developed dorsal, ventral and anterior flanges of membrane bone, with a dorsally directed pointed process and a sharply tapering, needle-like anterior process. At 22.0 mm SL (Fig. 11D), most of the hyomandibulo-palatoquadrate cartilage has been replaced by the autopalatine, metapterygoid, quadrate and hyomandibular, leaving only narrow zones of cartilage between the last three bones. A QMF does not form in development.
Silurus glanis
We illustrate three stages: 11.2 mm SL, 18.1 mm SL, and 29.6 mm SL (Fig. 12). Our smallest stage (Fig. 12A) resembles that of Megalechis, but the hyomandibulo-palatoquadrate cartilage is wider. At 18.1 mm SL (Fig. 12B), the quadrate, metapterygoid and hyomandibular have ossified in the respective parts of the cartilage. As in Megalechis, the metapterygoid forms around the anteriorly directed pars metapterygoidea and develops membrane bone flanges dorsally, ventrally and anteriorly. At 29.6 mm SL (Fig. 12C), the metapterygoid, quadrate and hyomandibular have enlarged and cover larger parts of the cartilage, restricting it to a smaller area between the three bones. A QMF is not developed.
Apteronotus leptorhynchus
We illustrate four stages: 8.6 mm NL, 12.1 mm SL, 20.4 mm SL, and 66.0 mm SL (Fig. 13). In our smallest stage (Fig. 13A), the pars autopalatina is spatially separated from the pars quadrata plus metapterygoidea by a large gap, resembling the condition in the siluriforms described above. This gap remains and is bridged only by the developing endopterygoid. In our 12.1 mm SL stage (Fig. 13B), the quadrate and metapterygoid are developed as perichondral lamellae around the cartilage of the pars quadrata plus metapterygoidea. The pars autopalatina remains cartilaginous. At 20.4 mm SL (Fig. 13C), ossification of the quadrate and metapterygoid has proceeded and in some areas has replaced the cartilage via endochondral ossification. A small articular cartilage has developed between the pars autopalatina and the maxilla. At 66 mm SL, our largest stage (Fig. 13D), the quadrate and metapterygoid have replaced most of the cartilage of the pars quadrata plus metapterygoidea, and only a narrow cartilage strip separates these two bones. A QMF is not developed at any stage during the development of Apteronotus.
Discussion
The QMF has received considerable attention in the past, and different researchers have commented on its distribution among bony fishes and its phylogenetic significance. Early morphologists, like Sagemehl (1885) and Regan (1911), were concerned only with the distribution of this character, and reported the presence of the QMF in characiform and some cypriniform taxa. Greenwood et al. (1966: 385) concluded that the presence of the QMF "is common in characoids…and from this it may be assumed that its presence is primitive for cyprinids and, indeed, all cyprinoids" and thus consequently homologous between these two groups. Gosline (1973: 769) argued against homology of the QMF in characoids and that in cyprinoids and noted that a similar fenestra also occurred in clupeoids, citing Ridewood (1905). In Gosline's (1973) opinion, evolutionary development of the fenestra has a functional reason and "provides increased space for the contracted adductor mandibulae"; he later added (Gosline 1975: 9) that this also facilitated "vertical movement of the hyoid bar internally." Howes (1978: 47) considered the fenestra in cyprinoids to be a "plesiomorph" feature and reported it in Macrochirichthys, Salmostoma and Securicula (as Pseudoxygaster). He offered a different interpretation of the functional role of the QMF, suggesting it may relieve "stresses by directing forces generated in the lower jaw around the perimeter of the pterygoid bones and into the cranium" or act "as a type of hinge which enables the pterygoid bones to undergo lateral rotatory movements." Howes (1979) subsequently reported the presence of the fenestra also in Cabdio (as Aspidoparia) and Luciosoma. Sawada (1982) concluded that the QMF is a derived feature among Cobitoidei and criticized previous functional explanations by Gosline (1973, 1975) and Howes (1978) as unsatisfactory. Fink and Fink (1981) used the presence of a QMF as a putative synapomorphy for the teleost taxon Otophysi. In contrast, Arratia's (1992) analyses resolved this character either as a synapomorphy of cypriniforms plus characiforms or as independently derived in both groups, a result of her optimization method, DELTRAN. She also pointed out that the quadrate-metapterygoid fenestra in Opsariichthys forms after the two bones have ossified, by changes in the shape of their ventral margins and their spatial removal from the symplectic, with which the metapterygoid is sutured in early stages.
In the update and revision of their influential 1981 paper, Fink and Fink (1996) discussed the presence of a quadrate-metapterygoid fenestra as an otophysan synapomorphy, and pointed out that their earlier mention of this structure in some homalopterids, citing Ramaswami (1952b) in support, was in error, but that Ramaswami (1952a) illustrated such a fenestra in Homaloptera amphisquamata (Weber & de Beaufort).
In a morphology-based phylogenetic study of cypriniform fishes, Conway (2011) coded the QMF (his character 43) as present in all members of Cobitidae and in the characiform outgroup Distichodus antonii Schilthuis. Though Zacco (Z. cf. platypus (Temminck & Schlegel)) was included as a member of the ingroup by Conway (2011), he did not consider its quadrate-metapterygoid opening to be homologous with that of cobitids and characiforms. He therefore coded the QMF as absent for Zacco in his data matrix. Conway's (2011) analysis recovered the QMF as a derived character of the Cobitidae, and one acquired independently of the characiform QMF, thus supporting Sawada's (1982) view.
Our results show that the so-called quadrate-metapterygoid fenestra of cypriniforms and characiforms comprises structures with greatly differing development and adult anatomy.
Among cypriniforms, a quadrate-metapterygoid fenestra has previously been reported from the opsariichthyine cyprinids Opsariichthys and Zacco (Regan 1911; Greenwood et al. 1966; Arratia 1992), the danionine cyprinids Salmostoma, Macrochirichthys, Securicula, Cabdio (as Aspidoparia), and Luciosoma (Gosline 1975; Howes 1978, 1979), and the cobitids Acantopsis, Cobitis, Iksookimia, Lepidocephalichthys, Niwaella, Pangio (as Acanthophthalmus), and Sabanejewia (Chranilov 1927; Ramaswami 1953; Sawada 1982; Conway 2011). Of these cypriniform taxa, ontogenetic information on the hyopalatine arch is available only for the cyprinid Opsariichthys, from Arratia (1992). She noted that what she called the QMF in this taxon develops from an early developmental condition in which the metapterygoid and symplectic are initially (at 26.5 mm SL) sutured to each other with no space between them and the quadrate. The fenestra forms in subsequent development by spatial changes in the relative position of the three bones, starting at 30 mm SL and leading to the formation of a large gap between them by 120 mm SL (see Fig. 4).
Our results on the development of the closely related opsariichthyine cyprinid Candidia confirm Arratia's (1992) description. The so-called QMF in Candidia forms by a separation of the quadrate and metapterygoid from the symplectic, which results in the formation of a more or less circular gap between the two former bones and the latter (Fig. 3D). As the so-called QMF in opsariichthyine cyprinids does not form as a true fenestra (an opening within a skeletal structure), we propose to refer to this type of opening as the quadrate-metapterygoid gap, or QMG. The QMG can be distinguished from the QMF in that the former is not limited ventrally by the quadrate and metapterygoid. Judging from the published images and our reinvestigation of the cypriniform material mentioned by Howes (1979), and utilizing this difference, we hypothesize that the opening between the quadrate and metapterygoid in those cyprinids (Salmostoma, Macrochirichthys, Securicula, Cabdio, Luciosoma) represents a QMG and not a QMF. The opening in Homaloptera amphisquamata in Fig. 5c of Ramaswami (1952a), which was interpreted by Fink and Fink (1996) as a QMF, is almost certainly also a QMG, as it is identical to the QMG of Candidia and Opsariichthys in terms of its relationship to the quadrate, metapterygoid and symplectic.
Our ontogenetic study of the cobitid Cobitis dalmatina demonstrates that the quadrate and metapterygoid initially develop in a typical fashion from a continuous and entire palatoquadrate cartilage (Fig. 5A-C). From ca. 20 mm SL, the area in the middle between the two bones begins a process of bone resorption (Fig. 5D), resulting in a large fenestra surrounded by the quadrate and metapterygoid. Thus, the opening in Cobitis is a QMF, though it develops in a way dramatically different from the QMF of characiforms.
Since Sagemehl's (1885) seminal monograph on characiform osteology, the presence of a QMF in members of this group has been well known (Regan 1911; Weitzman 1962). A number of papers on characiforms have been published (Fink and Fink 1996; Vandewalle et al. 2005; Walter 2013; Mattox et al. 2014; Carvalho and Vari 2015; Marinho 2022) that include information on the development of the hyopalatine arch. The most comprehensive is Mattox et al. (2014), who showed that a QMF is already present between the pars quadrata and pars metapterygoidea in the unossified palatoquadrate cartilage at very early developmental stages in the basal South American characid (now bryconid) Salminus brasiliensis. Two of the characiforms we studied herein, the African alestid Alestopetersius and the South American ctenoluciid Ctenolucius, are very similar in terms of their development of the hyopalatine arch and QMF. Early stages already show a QMF in the palatoquadrate cartilage, with the fenestra persisting beyond the start of the ossification process into the adult. Alestopetersius differs, however, from Ctenolucius in that the quadrate fails to develop along the ventral arm of the cartilage bordering the QMF, so that this fenestra is not entirely surrounded by the quadrate and metapterygoid, but is also bordered by the symplectic at its posteroventral corner (Fig. 10). This condition in adult Alestopetersius superficially resembles the QMG of Candidia and Opsariichthys, but develops, of course, completely differently.
We also studied the hyopalatine arch development of two other characiforms, Lebiasina and Pyrrhulina, belonging to the South American family Lebiasinidae, which have been cited as lacking a QMF by Weitzman (1964), a character that has been considered a synapomorphy of this family by Vari (1995: 24). We found an interesting and intriguing developmental pattern relating to the QMF in lebiasinids that seems to be exclusive to this clade. In early developmental stages of Lebiasina, a QMF is absent from the palatoquadrate cartilage, unlike in most other characiforms, in which it is present from even the earliest stages (see Fig. 6 for Ctenolucius and Fig. 9 for Alestopetersius; Vandewalle et al. 2005; Walter 2013; Mattox et al. 2014). Instead, a fenestra develops in the palatoquadrate cartilage at a later stage by resorption of cartilage, before ossification of the quadrate and metapterygoid starts. Later in ontogeny, when the bones start to form, a thin membrane bone flange develops from the metapterygoid to cover the fenestra, fully closing it in the adult. Thus, the QMF in Lebiasina is transient in ontogeny. In Pyrrhulina, the condition seems more extreme because the QMF never develops in the palatoquadrate cartilage, and the quadrate and metapterygoid develop in the solid, non-fenestrated cartilage as in non-characiform otophysans. The condition in Lebiasina seems to be an intermediate character state between the regular characiform condition of a normal fenestra developing in the cartilage and the more derived lebiasinid condition, in which the QMF never develops.
Although widespread in the Characiformes, the QMF has been reported as absent in adults not only of some members of the family Lebiasinidae (see above, and Weitzman 1964; Vari 1995), but also of some Chilodontidae and Curimatidae (e.g., Curimatopsis, Vari 1989), and Anostomidae, such as Leporinus (Roberts 1969), Anostomus and Gnathodolus (Winterbottom 1980). The development of the hyopalatine arch in most of these taxa is currently unknown. Further study may be able to clarify whether absence of a QMF in the adult is the result of closure of the fenestra during development, as in Lebiasina, or of true absence of the QMF from all developmental stages, as in Pyrrhulina.
Notwithstanding the absence of a QMF in some derived characiforms, the fact that it is present in the African distichodontids, citharinids, alestids and hepsetids and in members of most South American characiform families indicates that it is a homologous structure within the Characiformes and, at the same time, a convincing synapomorphy of this order of otophysan Ostariophysi. Based on the significant differences in the development of the QMF in characiforms and in cobitid cypriniforms, we conclude that the QMFs of the two taxa are not homologous. This is further supported by the phylogenetic position of the two groups, with cobitids deeply embedded not only within Cypriniformes but also within Cobitoidei, and we concur with Sawada (1982), who suggested that the cobitid QMF is a character restricted to this group of Ostariophysi.
Among clupeiforms, a QMF has been illustrated by Ridewood (1905) in Alosa, and subsequently reported also in Brevoortia (Gosline 1975) and Jenkinsia (Arratia and Schultze 1991). Its presence has been confirmed by us in Alosa and Brevoortia (Fig. 14). Unfortunately, nothing seems to be known about the development of this structure in these clupeoids, but the absence of a QMF in Denticeps, most other clupeomorphs and all anotophysan Ostariophysi suggests that the QMF in these taxa is not homologous to the QMF of characiforms or cobitids. Further study of this structure in clupeoids is warranted.
Figure 1. The earliest illustration of the quadrate-metapterygoid fenestra (marked with a yellow asterisk) in the hyopalatine arch of the South American characiform Serrasalmus rhombeus. Illustration from Rosenthal (1816).
"Biology"
] |
An updated, computable MEDication-Indication resource for biomedical research
The MEDication-Indication (MEDI) knowledgebase has been utilized in research with electronic health records (EHRs) since its publication in 2013. To account for new drugs and terminology updates, we rebuilt MEDI to overhaul the knowledgebase for modern EHRs. Indications for prescribable medications were extracted using natural language processing and ontology relationships from six publicly available resources: RxNorm, Side Effect Resource 4.1, Mayo Clinic, WebMD, MedlinePlus, and Wikipedia. We compared the estimated precision and recall between the previous MEDI (MEDI-1) and the updated version (MEDI-2) with manual review. MEDI-2 contains 3031 medications and 186,064 indications. The MEDI-2 high precision subset (HPS) includes indications found within RxNorm or at least three other resources. MEDI-2 and MEDI-2 HPS contain 13% more medications and over triple the indications compared to MEDI-1 and MEDI-1 HPS, respectively. Manual review showed MEDI-2 achieves the same precision (0.60) with better recall (0.89 vs. 0.79) compared to MEDI-1. Likewise, MEDI-2 HPS had the same precision (0.92) and improved recall (0.65 vs. 0.55) compared with MEDI-1 HPS. The combination of MEDI-1 and MEDI-2 achieved a recall of 0.95. In updating MEDI, we present a more comprehensive medication-indication knowledgebase that can continue to facilitate applications and research with EHRs.
In 2013, we introduced the publicly available MEDication-Indication (MEDI) knowledgebase, which integrates information from four public medication resources (RxNorm, SIDER 2, MedlinePlus, and Wikipedia) to identify relationships between medications and their indications, including both on-label and off-label indications 20. MEDI described medications with RxNorm concept unique identifiers (RxCUIs) and indications with ICD-9-CM (International Classification of Diseases, Ninth Revision, Clinical Modification) codes. Our original study demonstrated that the combination of resources could provide more comprehensive coverage of indications than any single resource alone without compromising precision 20. This observation was supported by the development of the Drug Evidence Base (DEB), a medication-indication knowledgebase that took a similar approach to aggregating both indication and adverse effect information from several resources 21.
The first version of MEDI, which we will henceforth refer to as MEDI-1, has been well utilized in pharmaceutical and clinical research 5,8,22-24. However, the resource has become progressively outdated over the past seven years, limiting its usefulness 11. For instance, some drugs are no longer commercially available in the U.S., and the FDA approved 220 novel drugs between 2015 and 2019 25. The adoption of ICD, Tenth Revision, Clinical Modification (ICD-10-CM) has also made MEDI-1 less applicable to modern EHRs. Furthermore, a number of the resources used to build MEDI-1 have been updated; for example, SIDER 2 has been updated to SIDER 4.1 7.
In this paper, we present MEDI-2, an updated medication indication knowledgebase for biomedical research with modern EHRs. We built MEDI-2 with information from six public medication resources, including updated versions of the four original resources. With an updated manual review design, we evaluated the current release MEDI-2 and compared the precision and recall of MEDI-1, MEDI-2, and a combined MEDI-1 and MEDI-2 resource (MEDI-C).
Summary of MEDI-2.
A flowchart outlining the construction of MEDI-2 from the six resources is shown in Fig. 1. Briefly, medications from the resources were mapped to RxCUIs, and indications were mapped first to Unified Medical Language System concept unique identifiers (UMLS CUIs) and subsequently to ICD-9-CM and ICD-10-CM codes. The overlap between the six resources is visualized in Fig. 2. Information was available from at least two resources for 2168 (71.6%) of these medications. Of note, 553 medications were found only in WebMD. However, many of these unique medications were traditional remedies, extracts, or oils, such as coconut oil (RxCUI 1309239), Asian ginseng extract (RxCUI 1370774), or soy isoflavones (RxCUI 1807769).
Of the 3031 medications included in MEDI-2, we identified 4323 unique UMLS CUIs related to indications, giving 36,348 UMLS CUI medication-indication pairs. After mapping to ICD codes, there were 3072 unique ICD-9-CM codes and 5373 ICD-10-CM codes, resulting in 74,971 ICD-9-CM indication pairs and 111,093 ICD-10-CM indication pairs (Table 1). One medication can be paired with an indication in both ICD-9-CM and ICD-10-CM. As shown in Fig. 3, a large proportion (77.8%) of these indication pairs were identified from only a single resource. Some of these single-resource pairs reflect only preliminary evidence, such as amiloride paired with pancreatic cancer, where amiloride has been tested only in pancreatic cancer cell lines 26. Using a combination of resources excluding RxNorm, precision improved as we increased the threshold for the number of resources that contained the medication-indication pair. We selected the high-precision subset for MEDI-2 to include all medication-indication pairs from RxNorm or from at least three other resources. Therefore, the MEDI-2 high precision subset (HPS) comprises 2000 medications and 34,448 indication pairs, including both ICD-9-CM and ICD-10-CM indications.
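As a concrete illustration of the HPS rule just described (a pair is retained if it comes from RxNorm or from at least three other resources), the Python sketch below applies the rule to toy records; the field names and example pairs are ours, not MEDI's actual schema.

```python
# A sketch of the MEDI-2 high-precision-subset (HPS) rule: keep a
# medication-indication pair if it appears in RxNorm or in at least
# three of the other resources.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    rxcui: str          # medication (RxNorm concept unique identifier)
    icd: str            # indication (ICD-9-CM or ICD-10-CM code)
    sources: frozenset  # resources the pair was extracted from

def in_hps(pair: Pair) -> bool:
    if "RxNorm" in pair.sources:
        return True
    return len(pair.sources - {"RxNorm"}) >= 3

pairs = [
    Pair("435", "J98.01", frozenset({"RxNorm", "WebMD"})),            # kept (RxNorm)
    Pair("7052", "R52", frozenset({"WebMD", "Mayo", "MedlinePlus"})), # kept (3 resources)
    Pair("1309239", "E78.5", frozenset({"WebMD"})),                   # dropped
]
hps = [p for p in pairs if in_hps(p)]
print(len(hps))  # -> 2
```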
Comparison of MEDI-1, MEDI-2 and MEDI-C. In our original study, MEDI-1 included 3112 medications and 63,343 medication-indication pairs 20. After regrouping MEDI-1 using the same generic ingredient groupings that we used for MEDI-2, 2701 unique medications and 56,550 medication-indication pairs remained in MEDI-1. The decrease in the number of medications and medication-indication pairs was likely due to changes in RxNorm relationships. For example, 'morphine sulfate' (RxCUI 30236) and 'morphine hydrochloride' (RxCUI 235751) in MEDI-1 were both mapped to 'morphine' (RxCUI 7052) in MEDI-2, reducing the number of medication-indication pairs from 33 to 20 for this drug. A Venn diagram illustrating the overlap and differences between the medications included in MEDI-1 and MEDI-2 is shown in Fig. 4. There were 721 medications found only in MEDI-1 and not in MEDI-2, of which 254 (35.2%) are multi-ingredient medications, which we did not compare directly with MEDI-2 due to lack of standardization. Of the remaining 467 single-ingredient medications found only in MEDI-1, 79 medications were flagged by RxNorm as prescribable. In contrast, MEDI-2 has 1051 more prescribable medications than MEDI-1, including 93 multi-ingredient medications and 652 (62.0%) prescribable single-ingredient medications.

Table 2. Estimated precision of MEDI-2 for different resource combinations. a Indications that the reviewers deemed too ambiguous were excluded from analysis (e.g., ICD-10-CM R69 = Illness, unspecified). b HPS: high precision subset = indications from RxNorm or ≥ 3 resources.

In a review of 50 medication-indication pairs found in MEDI-1 HPS but not MEDI-2 HPS, we observed that 10 (20%) of the reviewed pairs were invalid. Additionally, 32 (64%) of the reviewed pairs had related or better indications in MEDI-2 HPS. For example, although the medication-indication pair 'albuterol' (RxCUI 435) and 'Acute bronchospasm' (ICD-9-CM 519.11) was found only in MEDI-1, MEDI-2 identified similar indications for albuterol, including 'Acute bronchospasm' (ICD-10-CM J98.01) and 'Exercise induced bronchospasm' (ICD-9-CM 493.81). Of the remaining 8 pairs that were valid only in MEDI-1 HPS, 5 involved medications that are not currently prescribable in the U.S.: cefamandole, chlorphenesin, streptokinase, pemoline, and valdecoxib. Therefore, we also grouped the indications from both MEDI-1 and MEDI-2 into MEDI-C, since the combination provides higher recall for research with historical clinical data.
(Table 2 column headings: Resource; Medications; Indication pairs; Total reviewed a; True positives; Precision.)
The estimated precision and recall for MEDI-1, MEDI-2, and MEDI-C (the combination of MEDI-1 and MEDI-2) are shown in Table 3. The reported number of indication pairs for MEDI-2 is markedly greater than for MEDI-1, since MEDI-2 includes both ICD-9-CM and ICD-10-CM indications. Both MEDI-2 and MEDI-2 HPS have similar precision and improved recall compared to MEDI-1 and MEDI-1 HPS, respectively. MEDI-C, the combined version of MEDI-1 and MEDI-2, has a much higher recall (0.95) than MEDI-1 (0.79) and MEDI-2 (0.89) alone. Similarly, we observed that MEDI-C HPS has improved recall (0.67) compared to MEDI-1 HPS (0.55) and MEDI-2 HPS (0.65) alone. These observations suggest there are medications or indications identified in MEDI-1 that are not available in MEDI-2, likely because some medications have been commercially withdrawn from the U.S., as observed in our reviews.
Discussion
MEDI-2 is a comprehensive medication-indication knowledgebase prepared for biomedical research with modern EHRs. Leveraging information from six publicly available medication resources allows MEDI-2 to capture a broad range of medications and indications, improving precision and recall over any one resource alone 20. Moreover, indications in MEDI-2 are represented with the widely used ICD-9-CM and ICD-10-CM billing codes, allowing MEDI-2 to be easily utilized for research in many EHR systems. Compared to MEDI-1, MEDI-2 captures many more medications and indications, and also modernizes MEDI by capturing ICD-10-CM indications. Despite the sharp increase in medication-indication pairs in MEDI-2, our review showed that MEDI-2 has an overall improved performance compared to MEDI-1, improving recall (0.89 vs. 0.79) without sacrificing precision. Similarly, MEDI-2 HPS also increased the overall number of covered medications (2000 vs. 1764) with improved recall (0.65 vs. 0.55) compared to MEDI-1 HPS. We also observed that the precision for individual resources was better in MEDI-2 than in MEDI-1. For instance, the precision for Wikipedia improved from 0.56 in MEDI-1 to 0.74 in MEDI-2. This may be due to improvements in the resources themselves or in our pipeline for extracting and mapping indications.
Our review showed that a significant portion (64%) of the medication-indication pairs found only in MEDI-1 HPS had a similar or better indication in MEDI-2 HPS, and an additional 20% found only in MEDI-1 HPS were invalid pairs. Notably, the review identified five drugs that are no longer prescribable in the U.S. but whose indication pairs may still be valuable when conducting research with longitudinal and historical EHR data. Therefore, we are also releasing MEDI-C, which incorporates MEDI-1 into MEDI-2 with a flag indicating which resource each medication-indication pair is from. Our reviews found that MEDI-C achieved a much higher recall (0.95) than MEDI-1 (0.79) and MEDI-2 (0.89) alone.
A notable obstacle for MEDI-2 was the mapping of indications from free text in the articles to ICD. When rebuilding MEDI, we observed that the UMLS CUIs extracted by natural language processing (NLP) for indications did not always map to ICD codes. For instance, a free-text mention of 'breast cancer' would be extracted by NLP as 'Breast Carcinomas' (CUI C067822), which maps only to Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) concepts for breast cancer through the UMLS. We were able to recapture some ICD codes by mapping the SNOMED CT concepts to ICD, but it is possible some indications were overlooked. We also observed that ICD-10-CM indications were more easily captured than ICD-9-CM indications, which are still useful for research with older EHRs. This was likely due to updates in the UMLS concept tables that were used to map the UMLS CUIs extracted from the articles to ICDs. By integrating MEDI-1 into MEDI-2 for MEDI-C, we are able to keep more ICD-9-CM indications, but further work is needed to refine the mapping of free-text indications to ICD. Additionally, we provide the indications represented as UMLS CUIs alongside ICDs for MEDI-2, which can be useful for NLP tasks with free-text clinical notes.
Several limitations of MEDI-2 should be acknowledged. First, MEDI-2, like MEDI-1, is limited to medications and indications found in the public resources. Public resources are not perfect; we observed lower precision for indications extracted from fewer than three resources. While most of the publicly available medication resources overlapped substantially with each other, there were 553 medications identified only in WebMD. As a consumer-oriented resource, WebMD often includes supplements and alternative/homeopathic medicines that were not found in the other resources. In particular, WebMD discusses many extracts and essential oils whose benefits may have limited evidence.
Concept extraction via NLP still remains a challenge, with potential misrecognitions or mixing with adverse effects. MEDI-2 is primarily focused on single-ingredient medications and likely excluded some prescription or branded medications that combine several ingredients. There is less naming standardization for multi-ingredient medications, which causes difficulty when mapping to RxCUIs. Additionally, the precisions and recalls reported in this study are imperfect estimates, but the lack of a gold standard makes it difficult to efficiently assess resources as large as MEDI without estimation from manual review. However, the similar precision calculated from manual reviews for both MEDI-1 and MEDI-2 supports our precision estimation. Lastly, MEDI reports only binary relationships for medication-indication pairs and does not include more granular detail about each pair, such as distinguishing between preventative and therapeutic indications. Additionally, we made no judgements on the strength of evidence for off-label indications. Indications mapped from SIDER 4.1, which is derived directly from the Food and Drug Administration's structured product labels, may be considered plausible evidence for 'on-label' indications 7. Further work is needed to capture more detailed information in an automated manner.
In summary, MEDI-2 marks a significant improvement and expansion over our original medication-indication knowledgebase. Our results showed that incorporating new and updated resources enabled MEDI-2 to capture many additional medications and indications with greater recall. As a freely available and comprehensive resource, MEDI-2 can continue to enable pharmaceutical and clinical research with EHRs.
Methods
Rebuilding MEDI with updated publicly available resources. We extracted medication-indication relationships from six publicly available resources. RxNorm provided medication-indication pairs directly through its ontology relationships, such as "levothyroxine" and "disorder of thyroid gland" or "papaverine" and "erectile disorder." SIDER 4.1 provided medication names as free text and indications as UMLS CUIs 7. We mapped the SIDER 4.1 medication names to RxCUIs by string matching with the UMLS. Mayo Clinic, WebMD, and MedlinePlus all maintain directories of articles describing medications. We wrote a Python bot that automatically scraped the article titles and body text from these directories, excluding article subsections related to side effects or contraindications. We mapped the article titles to RxCUIs and combined articles with the same RxCUI. Articles that mapped to several RxCUIs contributed the same indications to each of the mapped medications. For Wikipedia, we extracted articles by querying Wikipedia's application programming interface using the RxCUI concept names (i.e., medication names). We used the KnowledgeMap Concept Indexer to identify medical concepts, defined by UMLS CUIs, in each medication document 28. The KnowledgeMap Concept Indexer is a locally developed NLP pipeline that has been shown to effectively extract medical concepts from medical documents and online resources 20,28,29, outperforming the National Library of Medicine's MetaMap NLP tool in precision and recall 28,30. Medical concepts that were negated were excluded. We filtered the UMLS CUIs for the following semantic types: Disease or Syndrome, Congenital Abnormality, Acquired Abnormality, Anatomical Abnormality, Neoplastic Process, and Virus.
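The filtering step described above can be sketched as follows; the record fields (cui, semtype, negated) are assumptions about the NLP output, not the actual KnowledgeMap format.

```python
# A minimal sketch of the concept-filtering step: keep non-negated concepts
# whose UMLS semantic type marks a plausible indication.
INDICATION_SEMANTIC_TYPES = {
    "Disease or Syndrome",
    "Congenital Abnormality",
    "Acquired Abnormality",
    "Anatomical Abnormality",
    "Neoplastic Process",
    "Virus",
}

def indication_cuis(extracted_concepts):
    """Return the CUIs of non-negated, indication-typed concepts."""
    return {
        c["cui"]
        for c in extracted_concepts
        if not c["negated"] and c["semtype"] in INDICATION_SEMANTIC_TYPES
    }

concepts = [
    {"cui": "C0006142", "semtype": "Neoplastic Process", "negated": False},
    {"cui": "C0011849", "semtype": "Disease or Syndrome", "negated": True},
]
print(indication_cuis(concepts))  # -> {'C0006142'}
```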
The final version of MEDI-2 includes separate medication-UMLS CUI and medication-ICD code relationships. The identified UMLS CUIs from each resource were mapped to ICD-9-CM and ICD-10-CM codes with the UMLS concept tables. For CUIs that did not map directly to ICD but did map to SNOMED CT concepts, we used SNOMED CT to ICD mappings from the National Library of Medicine (https://www.nlm.nih.gov/healthit/snomedct/archive.html; accessed January 2020). For instance, the UMLS does not map 'Breast Carcinomas' (CUI C067822) to ICD codes but does map it to the SNOMED CT concepts for breast cancer. For UMLS CUIs that mapped to several ICD codes, each ICD code was considered a unique indication. Based on relationships within RxNorm, all medication concepts were grouped by their generic ingredient when possible (e.g., 'tylenol' is in group 'acetaminophen'). Medications that included multiple active ingredients were mapped to a combined multi-ingredient generic when possible (i.e., 'tylenol with codeine' mapped to 'acetaminophen / codeine') or to their single-ingredient components if not. We additionally regrouped MEDI-1 to generic ingredients using the same groupings as for MEDI-2, for consistency.
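A minimal sketch of this two-step mapping follows, with tiny stand-in dictionaries in place of the real UMLS and NLM tables; the specific identifiers and codes shown are illustrative, not verified entries from those tables.

```python
# Map a UMLS CUI to ICD codes: use the direct UMLS mapping where one
# exists, otherwise route through SNOMED CT via the NLM SNOMED-to-ICD map.
cui_to_icd = {"C0006826": {"ICD10CM:C80.1"}}    # direct UMLS mapping (stand-in)
cui_to_snomed = {"C0678222": {"254838004"}}     # 'Breast Carcinomas' (stand-in)
snomed_to_icd = {"254838004": {"ICD10CM:C50.919"}}

def map_cui_to_icd(cui):
    if cui in cui_to_icd:
        return cui_to_icd[cui]
    codes = set()
    for sctid in cui_to_snomed.get(cui, ()):
        codes |= snomed_to_icd.get(sctid, set())
    return codes

print(map_cui_to_icd("C0678222"))  # -> {'ICD10CM:C50.919'} via SNOMED CT
```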
Evaluating MEDI-2. For MEDI-1, we demonstrated that combining multiple independent resources improved the precision of the medication-indication pairs 20,31. We created a high-precision subset for MEDI-1 (MEDI-1 HPS) of medication-indication pairs that were extracted either from RxNorm, which already had high precision alone, or from two or more other resources.
We estimated the precision of MEDI-2 to evaluate whether adding two additional resources would affect our threshold for the high-precision subset. First, an author with a clinical background (NSZ) evaluated randomly selected subsets of 50 medication-indication pairs from each of the six resources used to build MEDI-2. The positive predictive value (PPV) for each resource was calculated by dividing the number of true positive medication-indication pairs by the total number of pairs reviewed. Reviewed medication-indication pairs that were found in more than one resource were included in the respective PPV calculation for each of the overlapping resources. Then, excluding RxNorm, two authors with a clinical background (VB and HNE) evaluated additional subsets of 50 medication-indication pairs derived from two, three, four, and five resources, respectively. Medication-indication pairs were deemed 'true' if the reviewers found evidence for the indication in UpToDate, clinical trials, or peer-reviewed studies. Ambiguous indications, namely ICD codes that are too broad (e.g., ICD-10-CM R69 = 'Illness, unspecified'), were excluded from analysis. The reviewers used studies published in peer-reviewed journals and UpToDate (https://www.uptodate.com), an evidence-based clinical resource commonly used by practicing clinicians, in their evaluation.
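The PPV bookkeeping described here, where a reviewed pair contributes to the PPV of every resource it came from, can be sketched as follows; the review labels are illustrative, not the study's data.

```python
# Per-resource PPV from manual reviews: each reviewed pair counts toward the
# PPV of every resource it was extracted from.
from collections import defaultdict

def per_resource_ppv(reviews):
    """reviews: iterable of (sources, is_true) for reviewed pairs."""
    reviewed = defaultdict(int)
    true_pos = defaultdict(int)
    for sources, is_true in reviews:
        for r in sources:
            reviewed[r] += 1
            true_pos[r] += int(is_true)
    return {r: true_pos[r] / reviewed[r] for r in reviewed}

reviews = [
    ({"Mayo", "WebMD"}, True),  # counts for both Mayo and WebMD
    ({"WebMD"}, False),
    ({"Mayo"}, True),
]
print(per_resource_ppv(reviews))  # -> {'Mayo': 1.0, 'WebMD': 0.5}
```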
We estimated the precision of a combination of resources R with the following equation:

Precision(R) = [ Σ_{r∈R} size(r) × PPV(r) ] / [ Σ_{r∈R} size(r) ],

where R is the set (combination) of reviewed resources r, size(r) is the number of medication-indication pairs in resource r, and PPV(r) is the estimated positive predictive value for resource r from the reviews.
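Under this definition, the combined estimate is a size-weighted average of per-resource PPVs, which can be computed as below; the counts and PPVs are placeholders, not the paper's reviewed values.

```python
# Size-weighted precision estimate for a combination of resources R.
def estimated_precision(combination, size, ppv):
    total = sum(size[r] for r in combination)
    return sum(size[r] * ppv[r] for r in combination) / total

size = {"Mayo": 12000, "WebMD": 30000, "MedlinePlus": 9000}
ppv = {"Mayo": 0.70, "WebMD": 0.55, "MedlinePlus": 0.75}
print(round(estimated_precision({"Mayo", "WebMD"}, size, ppv), 3))
```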
Comparison of MEDI-1, MEDI-2, and MEDI-C. We estimated the precision of MEDI-1, MEDI-2, and MEDI-C (the combination of MEDI-1 and MEDI-2) using the above-defined precision equation. A board-certified physician (VEK) also reviewed 50 medication-indication pairs that were found in MEDI-1 HPS but not in MEDI-2 HPS. For each medication-indication pair in the review subset, the clinician also indicated whether a similar or better indication was present in MEDI-2 HPS. In this update, we also designed an experiment to estimate recall. We pre-selected five common medications that have multiple indications spanning several domains: propranolol, methotrexate, sildenafil, gabapentin, and estradiol. A physician board-certified in internal medicine (VEK) used UpToDate to curate a list of clinically accepted on-label and off-label indications for the five medications. The clinician then reviewed the medication-indication pairs for the five medications from the three resources and indicated whether indications from the initial list were found in each resource.
Data availability
MEDI is made freely available for download at https://www.vumc.org/wei-lab/medi. Code and scripts used to construct MEDI are available upon request (wei-qi.wei@vumc.org).
"Medicine",
"Computer Science"
] |
A novel mitochondrial DnaJ/Hsp40 family protein BIL2 promotes plant growth and resistance against environmental stress in brassinosteroid signaling
Plant steroid hormones, brassinosteroids (BRs), are essential for growth, development and responses to environmental stresses in plants. Although BR signaling proteins are localized in many organelles, i.e., the plasma membrane, nuclei, endoplasmic reticulum and vacuole, the BR signaling pathway from perception at the cell-membrane receptor BRASSINOSTEROID INSENSITIVE 1 (BRI1) to nuclear events involves several steps whose details remain to be clarified. Brz (Brz220) is a specific inhibitor of BR biosynthesis. In this study, we used Brz-mediated chemical genetics to identify Brz-insensitive-long hypocotyls 2-1D (bil2-1D). The BIL2 gene encodes a mitochondria-localized DnaJ/Heat shock protein 40 (DnaJ/Hsp40) family protein, which is involved in protein folding. BIL2-overexpressing plants (BIL2-OX) showed cell elongation under Brz treatment, increased growth of the inflorescence and roots, regulation of BR-responsive gene expression, and suppression of the dwarf phenotype of a BRI1-deficient mutant. BIL2-OX plants also showed resistance against the mitochondrial ATPase inhibitor oligomycin and higher levels of exogenous ATP compared with wild-type plants. BIL2 also participates in resistance against salinity stress and strong light stress. Our results indicate that BIL2 induces cell elongation during BR signaling through the promotion of ATP synthesis in mitochondria.
Introduction
Brassinosteroids (BRs) are plant steroid hormones that regulate various processes, including plant growth and responses to environmental stress. Mutants deficient in BR biosynthesis or signal transduction exhibit phenotypes such as de-etiolated hypocotyls and opened cotyledons in the dark, and dwarfism with shortened leaves and stems in the light (Li et al. 1996; Li and Chory 1997). The activation of BR biosynthesis and signal transduction promotes hypocotyl and stem elongation and outward leaf curling. These results show that BR is necessary for cell elongation and growth of the plant. BR signaling factors are localized in various parts and organelles of the plant cell. BRs are perceived through a plasma membrane-localized receptor, BRASSINOSTEROID INSENSITIVE 1 (BRI1), a leucine-rich repeat receptor-like serine/threonine kinase that functions in cell elongation (Kinoshita et al. 2005). BRI1 works with the negative regulator BRI1 KINASE INHIBITOR 1 (BKI1) and the positive regulator BRI1-ASSOCIATED RECEPTOR KINASE 1 (BAK1) (Nam and Li 2002), which are anchored to the plasma membrane. BR signals received through these factors on the cell surface are transduced to bri1-5 SUPPRESSOR 1 (BSU1), a positive-acting phosphatase (Mora-Garcia et al. 2004), and BRASSINOSTEROID INSENSITIVE 2 (BIN2), a negative-acting kinase similar to glycogen synthase kinase 3-like kinases, in the cytosol. BIN2 kinase and BSU1 dephosphorylation activities regulate the phosphorylation of BRASSINAZOLE RESISTANT 1/Brz-INSENSITIVE-LONG HYPOCOTYLS 1 (BZR1/BIL1) (He et al. 2005; Asami et al. 2005) and bri1-EMS-SUPPRESSOR 1 (BES1) (Yin et al. 2002, 2005), which regulate gene expression in the nucleus. Although BR signal transduction from the plasma membrane to the cytosol and nuclei has been described, BR signaling factors in other organelles have not been clarified. ATP is well known as an energy source for growth and the life cycle in organisms from plants to animals. The plasma membrane H⁺-ATPase (a P-type ATPase) in plants is activated by BRI1 kinase to induce cell wall expansion (Caesar et al. 2011). Reduced glutathione and dithiothreitol inhibit plant hypocotyl elongation; however, hypocotyl elongation could be recovered by treatment with ATP. NADPH oxidase activity and the endogenous production of nitric oxide positively affected hypocotyl elongation with the help of ATP treatment (Tonon et al. 2010). Translocase of the inner membrane 50 (TIM50) and translocase of the inner membrane 21 (TIM21) are mitochondrial proteins, and knock-out mutants of these genes showed reduced intracellular ATP levels and short hypocotyls in the dark (Kumar et al. 2012). These results suggested that ATP plays an important role in plant growth, but a regulatory link between other plant growth regulators (plant hormones) and ATP itself is still unknown.
Brz is a specific inhibitor of BR biosynthesis; Brz treatment confers a BR-deficient-mutant-like phenotype on wild-type plants (Asami et al. 2000). Here, we have isolated and characterized an Arabidopsis mutant, Brz-insensitive-long hypocotyls 2-1D (bil2-1D), by screening activation tagging mutant lines on Brz. The bil2-1D mutant exhibited positively regulated growth in the hypocotyl, branches and roots. The gene responsible for bil2-1D encodes a novel DnaJ/Hsp40 family protein that is localized in the mitochondria. In this manuscript, we set out to identify a mitochondrial protein involved in plant growth under BR signal transduction.
Plant materials, growth conditions and stress treatments
Arabidopsis thaliana ecotype Columbia (Col-0) was used as the wild-type plant. Seeds were germinated on medium containing 1/2 Murashige and Skoog (MS) salts (Duchefa, Haarlem, The Netherlands), 0.8% phytoagar (Duchefa, Haarlem, The Netherlands) and 1.5% sucrose, and seedlings were subsequently transferred to soil. The plants were grown at 22°C under white light (a 16 h light/8 h dark cycle for long-day conditions). For the genetic analysis by crossing, the bri1-5 (Wassilewskija [Ws-2]) mutant was used. Seeds of the WT and the bri1-5 mutant were obtained from the ABRC (Arabidopsis Biological Resource Center, Ohio State University, Columbus, OH, USA). For the ATP experiments, adenosine 5′-triphosphate disodium salt hydrate (ATP; Sigma-Aldrich, St. Louis, USA) was added to 1/2 MS medium at concentrations of 125 and 250 µM. The ATPase inhibitor oligomycin (Calbiochem, Darmstadt, Germany) was used at concentrations of 25 and 50 µM. The seeds were germinated in darkness for 7 days at 22°C. To induce salt stress, seeds were germinated on 1/2 MS plates containing 0 or 125 mM NaCl for 25 days. For the strong light stress analysis, seeds were germinated on 1/2 MS medium under strong light (486.2 µmol m⁻² s⁻¹) for 25 days; control plants were germinated under normal light (92.27 µmol m⁻² s⁻¹).
Screening for bil2-1D mutants

Approximately 10,000 of the RIKEN GSC Arabidopsis activation tagging lines (Nakazawa et al. 2003) were screened on 1/2 MS medium containing 3 µM Brz (Asami et al. 2000). After growth for 7 days in the dark, seedlings with hypocotyls longer than the controls were identified and transferred to soil. TAIL-PCR was used to amplify the genomic sequences flanking the T-DNA of pPCVICE4HPT, as previously described. Total RNA was extracted from dark-grown 3-day-old seedlings of wild-type and bil2-1D plants using an RNeasy Plant Mini Kit (Qiagen, Hilden, Germany). First-strand cDNA was synthesized with PrimeScript (Takara, Kyoto, Japan) and used in quantitative real-time PCR (qRT-PCR). The qRT-PCR analysis was performed according to the instructions provided for the Thermal Cycler Dice (Takara) using a SYBR Premix ExTaq system (Takara). The following gene-specific primers were used for qRT-PCR analysis: for BIL2, 5′-TGAGTCCCTCAGGTCCCTTA-3′ and 5′-GCCGCCTCCTTGGTAGAG-3′; and for the constitutively expressed control gene ACT2, 5′-CGCCATCCAAGCTGTTCTC-3′ and 5′-TCACGTCCAGCAAGGTCAAG-3′.
The resulting p35S-BIL2, BIL2 promoter:BIL2-GFP and BIL2 promoter:GUS fusion constructs were transformed into Col-0 by the floral dip method. Transgenic plants were screened on 1/2 MS medium containing 25 mg/l kanamycin.
Quantitative real-time PCR

Total RNA was extracted using an RNeasy Plant Mini Kit (Qiagen) from light-grown 24-day-old (for BIL2-OX) and 7-day-old (for BIL2-RNAi) seedlings; from dark-grown 5-day-old seedlings of wild-type and bil2-1D plants (for the genes flanking the T-DNA insertion site); and from wild-type and bil2-1D plants transformed with BIL2-OX or BIL2-RNAi. First-strand cDNA was synthesized using PrimeScript (Takara) and used in quantitative real-time PCR (qRT-PCR). The qRT-PCR was performed according to the instructions provided for the Thermal Cycler Dice (Takara) using the SYBR Premix ExTaq system (Takara).
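Throughout this study, expression values are normalized against ACTIN2. The paper does not state its exact quantification scheme, so the sketch below shows one common way such relative quantities are computed, the 2^(−ΔΔCt) method of Livak and Schmittgen; the Ct values are invented for illustration.

```python
# Minimal sketch of relative quantification by the 2^(-ddCt) method.
# Whether this study used ddCt or a standard curve is not stated, so treat
# this as an assumption, not the authors' pipeline. Ct numbers are invented.
def relative_quantity(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of a target gene vs. a calibrator sample, normalized to a
    reference gene (here, ACTIN2)."""
    delta_ct_sample = ct_target - ct_reference            # normalize sample
    delta_ct_calibrator = ct_target_cal - ct_reference_cal  # normalize calibrator
    ddct = delta_ct_sample - delta_ct_calibrator
    return 2 ** (-ddct)

# e.g., BIL2 in BIL2-OX relative to wild type (hypothetical Ct values):
fold = relative_quantity(ct_target=22.1, ct_reference=18.0,
                         ct_target_cal=26.3, ct_reference_cal=18.2)
print(f"BIL2 relative quantity vs. WT: {fold:.1f}-fold")  # 16.0-fold
```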
GUS staining
For histochemical detection of GUS expression, 2-, 3-, 11- and 28-day-old seedlings of BIL2 promoter:GUS transgenic plants were used. The samples were stained at 37°C overnight in GUS staining solution as previously described (Ito and Fukuda 2002). To test the induction of GUS expression, 2- and 3-day-old transgenic seedlings were treated with brassinolide (BL) or Brz for 3 h.
Subcellular localization analysis by fluorescence microscopy
The roots of 5-day-old BIL2-GFP transgenic seedlings were harvested into a freshly prepared staining solution of 500 nM CM-H2XRos (MitoTracker Red; Invitrogen) for 15 min at room temperature. After staining, the seedlings were washed three times in 1/2 MS medium for approximately 10 min (Hedtke et al. 1999).
ATP measurement
Five-day-old dark-grown mature seedlings were used. The samples were frozen in liquid N₂ after harvest and stored at −80°C. The samples were then immersed in sterile water and boiled for 15 min at 98°C to destroy any ATPases. Total ATP content in the supernatant was determined using a luciferase-based assay (Kikkoman, Tokyo, Japan), and the luminescence was measured in a SpectraMax luminometer (Molecular Devices, Sunnyvale, CA, USA). The ATP content was correlated with luminescence by comparison with the ATP standard provided in the kit.
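The final step, correlating luminescence with the kit's ATP standard, can be done with a line fitted through the origin; the sketch below assumes a linear standard (the kit's actual calibration may differ), and all numbers are hypothetical.

```python
# Hedged sketch: converting luminescence to ATP content via a standard curve.
import numpy as np

def atp_from_luminescence(sample_rlu, standard_rlu, standard_conc):
    """Fit luminescence ~ concentration through the origin, then interpolate."""
    standard_rlu = np.asarray(standard_rlu, dtype=float)
    standard_conc = np.asarray(standard_conc, dtype=float)
    # least-squares slope through the origin: sum(y*x) / sum(x^2)
    slope = (standard_rlu @ standard_conc) / (standard_conc @ standard_conc)
    return sample_rlu / slope  # concentration in the standard's units

# Hypothetical ATP standards (nM) and readings (relative light units):
conc = [10, 50, 100, 500]
rlu = [1950, 10100, 19800, 101500]
print(f"Sample ATP ~ {atp_from_luminescence(35000, rlu, conc):.0f} nM")
```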
Isolation and characterization of the bil2-1D mutant
Brz is a triazole-type compound that directly binds to the cytochrome P450 steroid C-22 hydroxylase encoded by the DWARF4 (DWF4) gene and specifically inhibits BR biosynthesis. Brz treatment reduces the BR content in plants and causes de-etiolation and dwarfism similar to BR-deficient mutants (Asami et al. 2000, 2001). Mutants that are insensitive to Brz may have activated BR signaling or biosynthesis; therefore, we screened for Brz-insensitive-long hypocotyl (bil) mutants in Arabidopsis. We previously isolated the bzr1/bil1 mutant from EMS-mutagenized lines and identified BZR1/BIL1 as a bHLH-type transcription factor that acts as a positive regulator in BR signaling (He et al. 2005; Asami et al. 2005).
In the dark, wild-type plants are typically etiolated, with an elongated hypocotyl and closed cotyledons. Treatment with Brz inhibits hypocotyl elongation and enhances cotyledon opening in the wild-type plant, a response termed de-etiolation in the dark. Under these growth conditions, we screened 10,000 Arabidopsis activation-tagged lines and isolated a semi-dominant mutant, Brz-insensitive-long hypocotyls 2-1D (bil2-1D). bil2-1D mutant seedlings showed longer hypocotyls and closed cotyledons when grown in the dark on a medium containing Brz (Fig. 1a, b).

Fig. 1 The bil2-1D mutant showed Brz resistance (a, b). a Hypocotyl elongation and b hypocotyl length of wild-type (WT) and bil2-1D seedlings grown on medium containing 0, 1 and 3 µM Brz in the dark for 7 days. Results are the mean ± SE (n > 30 seedlings); triple asterisk indicates significant differences relative to the control at P < 0.001 (Student's t test). c, d Side view (c) and top view (d) of wild-type and bil2-1D plants grown in soil under long-day conditions (16 h light, 8 h dark) for 30 days.

Fig. 2 A novel gene is the candidate for bil2-1D. a The T-DNA insertion site in bil2-1D (blue), the deleted region (green) and the BIL2 gene (red arrow) in the At2g42000-At2g42080 interval. b Quantitative real-time PCR analysis of At2g42080 mRNA expression in dark-grown, 3-day-old wild-type (WT) and bil2-1D seedlings, normalized against the expression of the ACTIN2 gene; error bars indicate standard deviation (n = 3). c Alignment of BIL2 (NP_181738) with its Arabidopsis thaliana homologs At3g58020 (NP_191361) and At2g18465 (NP_849977).

In the dark without Brz, the hypocotyls of
bil2-1D plants were elongated normally and were similar in length to the wild type. Light-grown bil2-1D mutants showed long petioles and outward-curling leaves, similar to plants overexpressing the BR receptor BRI1 (Wang et al. 2001) (Fig. 1c, d). These phenotypes suggested that the bil2-1D mutation enhances BR signaling.
Identification and characterization of the BIL2 gene
The co-segregation of the Brz-insensitive phenotype with the selection marker, together with TAIL-PCR, indicated a T-DNA insertion in an intergenic region near the end of chromosome II in bil2-1D (Fig. 2a). The At2g42080 gene, located approximately 14.3 kbp downstream of the T-DNA insertion, was overexpressed (Fig. 2b). A 7.3-kbp region between the T-DNA insertion and the At2g42080 gene was deleted (Fig. 2a). Knock-out mutants for At2g42030 (SALK_061281.28.80.x) and At2g42040 (SALK_007993) were examined for the expression of each mRNA (Suppl. Fig. S1) and showed the same phenotype as the wild-type plant (data not shown). Increased expression of the At2g42060 and At2g42070 genes was also identified in bil2-1D compared with the wild type (Suppl. Fig. S1); however, overexpression of At2g42060 or At2g42070 in the wild-type plant produced the same hypocotyl and leaf phenotypes as the wild-type plant (data not shown). These results suggested that the bil2-1D phenotype is due to mRNA overexpression of At2g42080, and the gene was named BIL2.
BIL2 encodes a novel protein that is categorized in the DnaJ/heat shock protein 40 (DnaJ/Hsp40) family, whose members function as molecular chaperones (Rajan and D'Silva 2009). DnaJ/Hsp40 proteins ordinarily have chaperone activity that repairs the misfolding of nascent polypeptides and counteracts protein aggregation during stress (Hartl 1996). DnaJ/Hsp40 proteins contain a J domain, which is essential for interaction with DnaK/Hsp70, and this domain is conserved in BIL2 (Fig. 2c, d). The J domain contains a highly conserved histidine, proline and aspartate (HPD) motif, which is critical for the function of these proteins (Cheetham and Caplan 1998). A BLAST search revealed two homologous genes of BIL2 in Arabidopsis thaliana, At3g58020 and At2g18465 (Fig. 2c), and homologous genes in Arabidopsis lyrata, castor bean (Ricinus communis), soybean (Glycine max), grape (Vitis vinifera) and rice (Oryza sativa) (Fig. 2d).
The BIL2 gene promotes plant growth

To confirm that the overexpression of At2g42080 caused the bil2-1D phenotype, the At2g42080 coding region was placed immediately downstream of the CaMV 35S promoter and transformed into wild-type Arabidopsis. The resulting BIL2 over-expresser (BIL2-OX) showed long hypocotyls and closed cotyledons when grown in the dark on medium containing Brz (Fig. 3a, b).
To analyze the role of BIL2 in plant growth, BIL2-OX and BIL2-RNAi transformants were generated and observed in detail. bil2-1D contains two gene deletions (At2g42030 and At2g42040).
Although the deletion of the two genes did not affect the bil2-1D phenotype (Suppl. Fig. S1), a possible confounding effect of the gene deletions had to be excluded. Because the BIL2 over-expresser reproduces the BIL2 gain of function, we used these transformants for further analysis.
In plants grown in soil under light, the primary inflorescence of BIL2-OX, which overexpresses BIL2 mRNA, was longer than that of wild-type plants (Fig. 3c-e). Although the number of primary inflorescences was similar between BIL2-OX and wild-type plants (data not shown), the numbers of secondary inflorescences and branches in BIL2-OX were increased compared with the wild-type plant (Fig. 3f, g). BIL2-OX also showed longer roots and increased lateral root numbers compared with the wild-type plant (Fig. 3h-j).
In contrast, a BIL2-RNA interference construct was placed downstream of the CaMV 35S promoter and transformed into wild-type Arabidopsis (BIL2-RNAi). Two BIL2-RNAi lines showed decreased BIL2 mRNA compared with the wild-type plant (Fig. 4a) and had shorter hypocotyls than wild-type plants when grown in the dark with and without Brz. The shortened hypocotyls of the BIL2-RNAi plants showed a dose-dependent response to the Brz concentration (Fig. 4b, c).

Fig. 3 BIL2-OX increased the growth of the hypocotyl, inflorescence and roots. a Hypocotyl elongation of wild-type (WT), bil2-1D, BIL2-OX1 and BIL2-OX2 seedlings grown on medium containing 3 µM Brz in the dark for 4 days. b Hypocotyl length of the same lines grown on medium containing 0, 1 and 3 µM Brz in the dark for 4 days; results are the mean ± SE (n > 30 seedlings). c Real-time PCR analysis of BIL2 gene expression in WT, BIL2-OX1 and BIL2-OX2 seedlings grown in the light for 24 days, normalized against the expression of the ACTIN2 gene; error bars indicate standard deviation (n = 3). d Phenotype of WT, BIL2-OX1 and BIL2-OX2 seedlings grown in soil under long-day conditions (16 h light, 8 h dark) for 40 days. e-g Primary inflorescence length (e), secondary inflorescence number (f) and branch number (g) of WT, BIL2-OX1 and BIL2-OX2 grown in soil for 40 days; results are the mean ± SE (n > 15 plants). h Root elongation of WT (white), BIL2-OX1 (pink) and BIL2-OX2 (orange) grown on 1/2 MS medium in the light for 14 days. i Primary root length at 14 and 21 days. j Lateral root number at 14 days. Triple asterisk indicates significant differences relative to the control at P < 0.001 (Student's t test).

Compared with the wild-type plants, the BIL2-RNAi lines did not show decreased expression of the BIL2 homologs of Arabidopsis thaliana (Suppl. Fig. S2). Although we could not obtain and analyze a BIL2 knock-out mutant, the BIL2-RNAi phenotype suggests the effect of a single BIL2 knockdown. Light-grown BIL2-RNAi transformants showed shorter inflorescences and a tendency toward fewer branches compared with wild-type plants (Fig. 4d). These results suggest that BIL2 plays an important role in promoting plant growth.
BR-responsive genes are regulated in BIL2-OX and BIL2-RNAi

Although the hypocotyl insensitivity of bil2-1D and BIL2-OX to Brz indicated that the BIL2 protein may be related to BR signaling, the relationship between BIL2 and BR at the molecular level had not been clarified. To reveal the importance of BIL2 for BR signaling, BR-regulated gene expression in BIL2-OX and BIL2-RNAi transformants was analyzed. Quantitative real-time PCR (qPCR) showed that BIL2 mRNA expression was higher in BIL2-OX (Fig. 3c) but lower in BIL2-RNAi compared with the wild-type plant (Fig. 4a). The BR-positive regulatory gene TCH4 showed higher expression in BIL2-OX than in the wild-type plant, and the BR biosynthetic gene CPD, which is downregulated by BR stimulation through a feedback mechanism, showed lower expression (Fig. 5a, b). In contrast, TCH4 showed lower expression, and CPD higher expression, in BIL2-RNAi compared with the wild-type plant (Fig. 5c, d). These results suggest that BIL2 plays an important role in BR signaling through the regulation of BR-responsive gene expression.

bil2-1D showed resistance against the BR biosynthesis inhibitor Brz (Fig. 1a, b), and BIL2 functions upstream of BR gene expression (Fig. 5). To determine where BIL2 acts in BR signaling, the genetic interaction between BIL2-OX and the BR receptor mutant bri1-5 was analyzed (Li and Chory 1997). When BIL2-OX1 was crossed with bri1-5, BIL2-OX1 suppressed the shorter hypocotyl of bri1-5 in the dark (Fig. 6a) and the shorter inflorescence of bri1-5 in the light (Fig. 6b). These results suggested that BIL2 plays an important role downstream of BRI1 in BR signaling.
The BIL2 gene is expressed in many plant organs

As BIL2 is a novel gene, analysis of the organs in which BIL2 is expressed can help reveal its function in plant growth. To investigate the specific expression of BIL2 at various developmental stages of Arabidopsis, the BIL2 promoter was placed upstream of the β-glucuronidase (GUS) gene and transformed into wild-type Arabidopsis. At 2 and 3 days after germination, BIL2 was highly expressed in the hypocotyl after BL treatment (Fig. 7b, e), and low expression was observed in the hypocotyl after Brz treatment (Fig. 7c, f), in comparison with the hypocotyl on 1/2 MS medium (Fig. 7a, d). qPCR analysis confirmed that the BIL2 mRNA expression responses to BL and Brz were reproduced in wild-type plants (Fig. 7g). The BIL2 promoter region does not contain a responsive element regulated by BES1 or BZR1/BIL1; thus, an unknown functional element regulated by unknown trans-acting factors might exist in the BIL2 promoter. At 11 days after germination, BIL2 was highly expressed in the shoot apical meristem (SAM) and lateral roots (Fig. 7h, i). At 28 days after germination, BIL2 was expressed in the flower bud
and pollen (Fig. 7j, k). These results suggested that BIL2 plays an important role in plant development at many stages of the plant life cycle.
The BIL2 protein localized in the mitochondria
To determine the subcellular localization of the BIL2 protein, a BIL2 promoter:BIL2-green fluorescent protein (GFP) fusion construct was transformed into wild-type Arabidopsis. Confocal scanning of the roots of BIL2-GFP transgenic seedlings revealed BIL2-GFP in dot-like structures (Fig. 8a). When co-stained with MitoTracker Red, which specifically stains mitochondria (Hedtke et al. 1999) (Fig. 8b), the two staining patterns overlapped (Fig. 8c). The dot-like structures shown by BIL2-GFP did not co-localize with FM4-64 as an endosome marker, XylT-RFP as a Golgi marker, or HDEL-RFP as an ER marker (Suppl. Fig. S3). These results indicated that BIL2 is localized to the mitochondria.
BIL2 may be involved in ATP synthesis in the mitochondria
Although BR signaling factors are localized in many plant organelles, a BR signaling factor in mitochondria had not been identified. To consider the molecular mechanism by which the mitochondrial protein BIL2 regulates plant growth in BR signaling, we focused on ATP synthesis, a major role of mitochondria in plant growth. The relationship between BIL2 and ATP in hypocotyl elongation was then analyzed. Exogenous ATP promotes hypocotyl elongation and also restores the hypocotyl elongation that is shortened by Brz treatment in the dark (Fig. 9a). Hypocotyl elongation of Arabidopsis is inhibited by oligomycin, an inhibitor of respiratory chain complex V of the mitochondrial electron transport chain and a blocker of H⁺-coupled ATP synthesis. The hypocotyls of BIL2-OX showed partial but significant resistance against oligomycin-induced inhibition of hypocotyl elongation in comparison with the wild-type plant (Fig. 9b). To probe the direct connection between ATP synthesis and BIL2, the ATP concentration in BIL2-OX and wild-type plants was analyzed. In plants germinated in the dark for 5 days, the endogenous ATP in two lines of BIL2-OX was higher than in the wild-type plant (Fig. 9c). To analyze the relationship between ATP and BR signaling in Arabidopsis, BR-responsive gene expression was analyzed in wild-type plants after a 3-h ATP treatment (Fig. 9d, e). In plants germinated in the dark for 7 days, the expression of TCH4, which is upregulated by BR stimulation, increased with the ATP concentration (Fig. 9d). By contrast, the expression of the BR biosynthetic gene CPD, which is downregulated by BR stimulation through a feedback mechanism, decreased with increasing ATP concentration (Fig. 9e). These results suggested that ATP synthesis promoted through the action of BIL2 in the mitochondria allows hypocotyl elongation in the presence of the BR biosynthesis inhibitor Brz, and that BR signaling-related gene expression is also promoted by ATP synthesis through BIL2.
BIL2 increases salt tolerance and strong light tolerance in Arabidopsis
Although BIL2 is a novel gene, it has been classified as a DnaJ gene family member in previous reports (Rajan and D'Silva 2009). The DnaJ gene family belongs to the heat shock proteins (HSPs), which protect against proteotoxic stress and assist the protein quality control system (Vos et al. 2008). To probe the molecular function of BIL2, the environmental stress tolerance of wild-type and BIL2-OX plants was analyzed. For the analysis of salt tolerance, plants grown on 1/2 MS medium or on NaCl medium were divided into groups according to the fresh weight of their above-ground parts, and the numbers in each group were counted. The number of heavy, surviving BIL2-OX plants grown on 125 mM NaCl was higher than that of wild-type plants (Fig. 10a, b). In the analysis of strong light stress tolerance, the number of heavy, surviving BIL2-OX plants grown under strong light (486.2 µmol m⁻² s⁻¹) was higher than that of wild-type plants (Fig. 11a, b). The response of BRI1-OX was similar to that of wild-type plants under salt stress (Fig. 10) and strong light stress (data not shown).

Fig. 7 The BIL2 gene is expressed in many plant organs. BIL2 promoter:GUS expression patterns in transgenic Arabidopsis plants. Dark-grown 2- and 3-day-old seedlings on 1/2 MS medium (a, d), treated with 100 nM BL (b, e) or 3 µM Brz (c, f). g Real-time PCR analysis of BIL2 gene expression in wild-type seedlings grown in the light for 10 days and treated with 100 nM BL or 3 µM Brz; values normalized against the expression of the ACTIN2 gene; error bars indicate standard deviation (n = 3). h, i Shoot apical meristem (SAM) (h) and root (i) of light-grown 11-day-old seedlings. j, k Flower bud (j) and pollen (k) of light-grown 28-day-old seedlings.

These results suggest that BIL2 maintains a function that protects against proteotoxic stress and can thus be classified as a DnaJ family protein according to protein function. ATP treatment promoted salt tolerance in wild-type plants (Suppl. Fig. S4), and direct treatment with BL caused a weak salt tolerance in wild-type plants (data not shown). These results showed that, although ATP synthesis supported by BIL2 plays important roles, the direct BL signaling effect might be weaker than that of ATP itself.
Discussion
BR regulates many processes in plant growth and development, such as cell division and elongation (Clouse and Sasse 1998). The BR biosynthesis-deficient mutants of Arabidopsis, de-etiolated 2 (det2) and dwf4 (Li et al. 1996; Choe et al. 1998), show a pleiotropic dwarf phenotype that can be recovered to a wild-type phenotype by feeding with BR; however, the BR receptor-deficient mutant bri1 displays a pleiotropic dwarf phenotype, including defective root elongation, that is not rescued by BR (Clouse et al. 1996). BR binds to BRI1, a member of the leucine-rich repeat kinase family (Li and Chory 1997; Kinoshita et al. 2005). The detailed mechanism downstream of BRI1 in BR signaling has been studied, and these results were revealed largely through loss-of-function mutants for BR biosynthesis and the BR receptor. BIN2, BZR1/BIL1 and BES1, the other BR signaling components, were identified through gain-of-function mutants.
Brz is a specific inhibitor of BR biosynthesis that blocks hydroxylation of the C-22 position of the BR side chain by binding directly to the DWF4 enzyme, a cytochrome P450 monooxygenase, through the triazole base of Brz (Asami et al. 2001). To analyze the mechanisms of BR signal transduction, we performed a chemical genetics screen using Brz to recover gain-of-function mutants. We screened Arabidopsis activation tagging lines and isolated the bil2-1D mutant, which displayed longer hypocotyls, characteristic of cell elongation, than the wild-type plant on medium containing Brz in the dark.

Fig. 9 a Hypocotyl elongation by ATP: hypocotyl lengths of dark-grown 7-day-old seedlings in the absence or presence of 1 µM Brz and ATP (125 and 250 µM); combinatorial treatments are indicated. b Hypocotyl lengths of dark-grown 7-day-old seedlings in the absence or presence of oligomycin (OM; 25 and 50 µM); twenty-five seedlings per treatment were analyzed in each experiment; data are the mean ± SE. c Total ATP concentration in dark-grown 5-day-old seedlings of wild-type (WT), BIL2-OX1 and BIL2-OX2; data are the mean ± SE of three independent experiments. d, e ATP can increase BR signaling: real-time PCR analysis of BR-regulated gene expression in wild-type (WT) seedlings grown in the dark for 7 days and treated with ATP for 3 h.

Light-grown bil2-1D exhibited a long-petiole phenotype similar to that of wild-type plants treated with BR or of BRI1-OX mutants. The BR marker gene TCH4, whose expression is upregulated by BR stimulation, was induced in BIL2-OX but not in the wild type. Conversely, CPD, a BR biosynthetic gene whose expression is downregulated by BR stimulation, was suppressed in BIL2-OX but not in the wild type. BIL2-OX could suppress the dwarfism of the BRI1-deficient mutant. BIL2 gene expression was induced by BL treatment and suppressed by Brz, suggesting that BIL2 is directly regulated by BL and is involved in cell elongation through BR signaling.
BIL2 is a novel protein, but analysis of its amino acid sequence revealed that it belongs to the DnaJ/Hsp40 family. DnaJ/Hsp40 proteins are functional partners of DnaK/heat shock protein 70 (Hsp70) proteins involved in protein folding, translation, stabilization and protein translocation across cell membranes. All members of the DnaJ/Hsp40 family contain a "J domain", which is essential for interaction with DnaK/Hsp70s. The J domain contains a highly conserved histidine, proline and aspartate (HPD) motif, which is critical for their functions. Some members of the DnaJ/Hsp40 protein family contain other conserved regions, such as the glycine/phenylalanine-rich region, termed the "G/F region", and a zinc-binding cysteine-rich sequence, termed the "zinc-finger domain". DnaJ/Hsp40 proteins are classified into three types on the basis of differences in these regions (Szyperski et al. 1994; Cheetham and Caplan 1998). Type I proteins contain all domains/motifs, including the J domain, the G/F region and the zinc-finger domain. Type II proteins possess the J domain and the G/F region but lack the zinc-finger domain. Type III proteins possess only the J domain (Kelley 1998; Fan et al. 2004; Walsh et al. 2004). BIL2 is classed as a type III J protein (Rajan and D'Silva 2009). DnaJ/Hsp40 proteins are widely distributed in plants, animals and humans. The DnaJ/Hsp40 protein family comprises six homologs in Escherichia coli, 22 homologs in Saccharomyces cerevisiae and 41 homologs in humans (Qiu et al. 2006). Arabidopsis has more than 400 DnaJ/Hsp40 family proteins (Rajan and D'Silva 2009), some of which contain tetratricopeptide repeats, such as tetratricopeptide repeat 2 (TPR2). TPR proteins function in various cellular processes, including cell-cycle control, mitochondrial and peroxisomal protein transport, the stress response and protein kinase inhibition (Goebl and Yanagida 1991). DnaJ/Hsp40 proteins regulate DnaK/Hsp70 and other proteins as chaperones. The DnaJ/Hsp40 protein TPR2 (DnaJC7) is involved in the folding of many proteins, and TPR2 mediates the retrograde transfer of substrates from Hsp90 to Hsp70 (Brychzy et al. 2003). BIL2-GUS expression was observed in the hypocotyl during the early stage and was strong in pollen during the flower developmental stage. BR biosynthesis- and signaling-deficient mutants show reduced pollen number, viability and release efficiency. MALE GAMETOPHYTE DEFECTIVE 1 (MGP1) encodes the FAd subunit of mitochondrial F1F0-ATP synthase in Arabidopsis, which is highly expressed in pollen and plays important roles in pollen formation.
BIL2 localization was detected in the mitochondria. Although BR signaling proteins are localized in many organelles, i.e., the plasma membrane, nuclei, endoplasmic reticulum (ER) and vacuole, this study is the first to report a BR signaling protein localized in the mitochondria. Our most interesting, but also most challenging, discovery during the BIL2 analysis was this mitochondrial localization. We therefore discuss below how a mitochondrial protein may regulate BR signaling and plant growth.
ATP is a vital factor in plant growth, essentially representing a major energy source of the cell. Most plant ATP is produced primarily in the mitochondria and secondarily in the chloroplast (Haferkamp et al. 2011). Therefore, understanding the mechanisms involved in ATP synthesis in the mitochondria is important. We hypothesized that ATP synthesis in the mitochondria by ATPase promotes hypocotyl elongation, and that ATPase folding or stabilization might be facilitated by BIL2 acting as a DnaJ/Hsp40 in the mitochondria. Although wild-type seedlings grown on medium containing Brz had shorter hypocotyls, BIL2-OX had longer hypocotyls owing to its resistance to Brz. The wild-type hypocotyl phenotype was recovered after treatment with ATP on medium containing Brz. These results showed that ATP plays an important role in hypocotyl elongation in the presence of Brz. The mitochondrial ATPase inhibitor oligomycin, which blocks the respiratory chain complex, inhibited hypocotyl elongation, but hypocotyl elongation of BIL2-OX was resistant to oligomycin compared with the wild-type plant. BIL2-OX also produced more endogenous ATP than the wild-type plant. These results support our hypothesis that BIL2 facilitates the folding or stability of ATPase in the mitochondria during plant growth (Fig. 12). Environmental stress causes misfolding, aggregation and degradation of proteins in each organelle of the plant. In plant mitochondria, such proteotoxic damage has been observed, and ATP generation plays an important role in stress resistance (Jacoby et al. 2011). BIL2-OX showed resistance against salinity stress and strong light stress, and ATP treatment promoted salinity resistance in wild-type Arabidopsis grown on medium containing NaCl (Suppl. Fig. S4). BIL2 is classified as a member of the DnaJ/Hsp40 family (Rajan and D'Silva 2009), but the actual function of the BIL2 protein has not been elucidated. The resistance conferred by BIL2 against environmental stress supports a function for BIL2 as a DnaJ/Hsp40 molecular chaperone in plant mitochondria. Further analysis will reveal the function of BIL2 in detail.
"Biology",
"Environmental Science"
] |
ON NRBUL CLASS OF LIFE DISTRIBUTIONS
In this paper, moment inequalities are derived for a new class of life distributions called new renewal better than used in Laplace transform order (NRBUL). For this new class, preservation under convolution and mixture is studied. A new test statistic for testing exponentiality versus NRBUL is investigated based on these moment inequalities, and Pitman asymptotic efficiencies of the test are computed. The critical values of this test are tabulated. The test is applied to several examples of censored and non-censored data. Finally, a version of the test for censored data is proposed.
Introduction
Certain classes of life distributions and their variations have been introduced in reliability theory; applications of these classes can be found in engineering, the social and biological sciences, maintenance and biometrics. Many statisticians and reliability analysts have proposed tests of exponentiality versus various classes of life distributions. [1] studied the relations among the classes NRBU, RNBU, NRBUE and HNRBUE. [2] studied a moment inequality for the NRBU class. [3] proposed a U-statistic method for the RNBU class. [4] proposed a moment inequality for the RNBRU class. [5] studied the class NBRUE based on the Laplace transform. The theme of this paper is to introduce a new class of life distributions, strictly larger than the new renewal better than used (NRBU) class, called new renewal better than used in Laplace transform order (NRBUL). Some properties of this class are studied, and test statistics for testing exponentiality versus this class are proposed for both complete and right-censored data.
Motivation and Definitions
Renewal survival function

Consider a component with lifetime X and distribution function F(x) that is put into operation. When a failure occurs, the component is replaced by a sequence of mutually independent, identically distributed components that are independent of the first component. In the long run, the remaining life distribution of the component in operation at time t is called the stationary renewal distribution. The corresponding renewal survival function is

W̄(t) = (1/µ) ∫_t^∞ F̄(u) du, where µ = E[X] = ∫_0^∞ F̄(u) du.

Next, we introduce the definitions of some classes of life distributions.

Definition 1.1. A non-negative random variable X with survival function F̄(x) is new renewal better (worse) than used, NRBU (NRWU), if F̄(x + t) ≤ (≥) W̄(t) F̄(x).

Definition 1.2. A non-negative random variable X with survival function F̄(x) is renewal new better (worse) than used, RNBU (RNWU), if W̄(x + t) ≤ (≥) W̄(x) W̄(t).

Definition 1.3. A non-negative random variable X with survival function F̄(x) is new renewal better (worse) than used in expectation, NRBUE (NRWUE), if ∫_x^∞ W̄(u) du ≤ (≥) µ_w e^{−x/µ_w}.

Definition 1.5. A non-negative random variable X with survival function F̄(x) is new renewal better (worse) than used in Laplace transform order, NRBUL (NRWUL), if …
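As a quick worked check that the exponential distribution sits on the boundary of these renewal-based classes, substitute the exponential survival function into the renewal survival function defined above:

```latex
% Boundary-case check: for an exponential lifetime, the stationary renewal
% survival function coincides with the original survival function.
\[
  \overline{F}(u) = e^{-u/\mu}
  \quad\Longrightarrow\quad
  \overline{W}(t) = \frac{1}{\mu}\int_{t}^{\infty} e^{-u/\mu}\,du
                  = e^{-t/\mu} = \overline{F}(t),
\]
% so each defining inequality above holds with equality, and the exponential
% law lies on the boundary between each class and its dual.
```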
In this paper, the preservation properties of the NRBUL class are presented in Section 2. In Section 3, the moment inequalities for the NRBUL class are derived. In Section 4, we test exponentiality versus the NRBUL class. In Section 5, the Pitman asymptotic efficiencies (PAE) of our test are considered. In Section 6, critical values for the lower and upper percentiles of our test are calculated. In Section 7, our test is applied to sets of real examples. Finally, testing for censored data is developed in Section 8.
Some Properties of the NRBUL Class

In this section, some properties of the NRBUL class under convolution and mixture are discussed.
Theorem 2.1. The NRBUL class is preserved under convolution.

Proof. The convolution of two independent distribution functions F1 and F2 in the NRBUL class is given by … and this leads to …, which completes the proof.
The following theorem shows that the NRBUL class is preserved under mixture.

Theorem 2.2. The NRBUL class is preserved under mixture.

Proof. If {Fα} is a set of probability distributions, where the index α is governed by a distribution G, then the mixture is … Upon using the Chebyshev inequality we get … and this leads to …, which completes the proof.
Moment Inequalities
In this section, the moment inequalities for the NRBUL class are derived; all moments are assumed to exist and to be finite.
Theorem 3.1. If F is NRBUL, then for all integers r ≥ 0, … The right-hand side of (3.2) is equal to … (see [6]). The result follows from (3.2), (3.3) and (3.4).
Corollary 3.2. Putting r = 1 in (3.1), we get …

4 Testing Exponentiality versus the NRBUL Class

In this section, we test H0: F is an exponential distribution versus H1: F is NRBUL and not exponential. When r = 1, we get the following measure of departure … Note that, under H0, δn is asymptotically normal with zero mean and variance σ² as given in (4.4).
(ii) Under H0, the variance reduces to …, where … and … Using U-statistic theory (see [7]), the variance σ² = Var(φ(X)) of δn is … Under H0, the variance σ² reduces to …

5 The Pitman Asymptotic Efficiency (PAE) of δn

In this section, the Pitman asymptotic efficiencies (PAEs) are computed for the linear failure rate (LFR), Makeham and Weibull families. The PAE is defined by …; therefore, …
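The displayed definition of the PAE does not survive in this copy; the form commonly used in this literature for tests of exponentiality, assumed here, is:

```latex
% Assumed standard definition of the Pitman asymptotic efficiency; theta_0
% denotes the null (exponential) value of the family parameter and sigma_0
% the null standard deviation of the test statistic delta(theta).
\[
  \mathrm{PAE}(\delta) \;=\; \frac{1}{\sigma_{0}}\,
  \left| \frac{d}{d\theta}\,\delta(\theta) \right|_{\theta \to \theta_{0}}
\]
```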
Monte Carlo Null Distribution Critical Points
In this section, we calculate the lower and upper percentiles of δn given in (4.2) based on 10,000 simulated samples of sizes n = 25, 27, 30(5), 40, 43, 45(5), 90. Table 6.1 gives these critical points of the statistic δn at s = 0.4.
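Because the kernel of δn is garbled in this copy, the sketch below only illustrates the general recipe behind a table of this kind: simulate many samples from the null (unit exponential), evaluate the test statistic on each, and take empirical percentiles. The `statistic` function is a placeholder contrast with zero mean under exponentiality, not the paper's δn.

```python
# Generic Monte Carlo recipe for null critical points (Table 6.1-style).
import numpy as np

rng = np.random.default_rng(2018)

def statistic(x, s=0.4):
    # Placeholder kernel (NOT the paper's delta_n): E[e^(-sX)] = 1/(1+s)
    # under the unit exponential, so this contrast has mean 0 under H0.
    return np.mean(np.exp(-s * x)) - 1.0 / (1.0 + s)

def critical_points(n, reps=10_000, probs=(0.01, 0.05, 0.95, 0.99)):
    sims = np.array([statistic(rng.exponential(size=n)) for _ in range(reps)])
    return {p: np.quantile(sims, p) for p in probs}

print(critical_points(n=30))
```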
7 Applications to Real Examples

In this section, δn is calculated for real examples to illustrate the application of our test.

Example 7.1. Consider the data set of 40 patients suffering from blood cancer (leukemia) from one of the Ministry of Health hospitals in Saudi Arabia (see [10]); the ordered lifetimes (in years) are as given in [10]. It was found that δn = −22.433, which is less than the tabulated value in Table 6.1. We therefore conclude that these data have the exponential property.

Example 7.2. Consider the following data set of 27 observations, representing the times of successive failures (in hours) of the air-conditioning system of plane 7913 in a fleet of Boeing 720 jet airplanes (see [11]). It was found that δn = −5.3835, which is less than the critical value in Table 6.1. We therefore conclude that these data have the exponential property.
Example 7.3. Consider the following data set from Kotz and Johnson, representing the survival times (in years) after diagnosis of 43 patients with a certain kind of leukemia (see [12]). It was found that δn = −12.4805, which is less than the critical value in Table 6.1. We therefore conclude that these data have the exponential property.
8 Testing Hypotheses versus the NRBUL Alternative for Censored Data
In this section, we propose a test statistic δcn for testing exponentiality versus the NRBUL class in the case of randomly right-censored samples. Suppose n objects are put on test, and let X1, X2, …, Xn denote their true lifetimes, assumed independent and identically distributed (i.i.d.) according to a continuous life distribution F. Let Y1, Y2, …, Yn be i.i.d. censoring times with continuous distribution G, independent of the X's. We observe the censored data (Zi, δi), i = 1, 2, …, n, where Zi = min(Xi, Yi) and δi = 1 if Zi = Xi (an observed failure) and δi = 0 otherwise. Let Z(0) = 0 < Z(1) < Z(2) < … < Z(n) denote the ordered Z's, and let δ(i) be the δi corresponding to Z(i). The product-limit estimator of the survival function F̄ (see Kaplan and Meier [13]) is then

F̄n(x) = ∏_{j: Z(j) ≤ x} [(n − j)/(n − j + 1)]^{δ(j)}.

We propose the following test statistic …, where … and … The percentile points of our test δcn in (8.1) are calculated based on 10,000 simulated samples of sizes n = 10(10)50, 51, 60, 70, 80, 81, 86. Here δcn = 1.91 × 10^337, which is greater than the tabulated value in Table 8.1; we therefore deduce that this data set has the NRBUL property.
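A self-contained sketch of the product-limit computation written above; the times and censoring indicators below are invented for illustration.

```python
# Kaplan-Meier product-limit estimate of the survival function.
import numpy as np

def km_survival(z, delta):
    """Return the sorted times and the product-limit estimate of F-bar at
    each time; delta[i] = 1 means an observed failure, 0 means censored."""
    order = np.argsort(z)
    z, delta = np.asarray(z)[order], np.asarray(delta)[order]
    n = len(z)
    j = np.arange(1, n + 1)  # 1-indexed ranks
    # factor ((n-j)/(n-j+1))^delta_(j): drops the estimate only at failures
    factors = ((n - j) / (n - j + 1.0)) ** delta
    return z, np.cumprod(factors)

times = [3, 5, 5, 8, 12, 16, 20]   # hypothetical observed times
events = [1, 0, 1, 1, 0, 1, 1]     # 1 = failure, 0 = right-censored
for t, s in zip(*km_survival(times, events)):
    print(f"S({t}) = {s:.3f}")
```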
Example 8.2. The following data represent 51 liver cancer patients from the Elminia Cancer Center, Ministry of Health, Egypt, who entered the study in 1999 (times in days). Of these, 39 observations are non-censored and the others are censored (see [15]). It was found that δcn = 1.4 × 10^303, which is greater than the tabulated value in Table 8.1; we therefore deduce that this data set has the NRBUL property.
Example 8.3. Consider the right-censored data for lung cancer patients from Pena (see [16]). These data consist of 86 survival times (in months) with 22 right-censored. It was found that δcn = 1.45 × 10^225, which is greater than the tabulated value in Table 8.1; we therefore deduce that this data set has the NRBUL property.
Fig. 5.1 shows the relation between s and the efficiency for the LFR, Makeham and Weibull families.
Figure 5.1. The relation between efficiencies and s.
Table 5.1. The PAEs for the LFR, Makeham and Weibull families. We compared the Pitman asymptotic efficiencies (PAEs) of our test at s = 0.4 with those of some other tests; the results, shown in Table 5.1, indicate that our NRBUL test is more efficient for all of the alternatives used.
Table 6.1. Critical values of the statistic δn at s = 0.4.
Table 8.1 gives the critical values of the statistic δcn at s = 0.4.
"Mathematics"
] |
The ATP-dependent Clp Protease of Escherichia coli SEQUENCE OF clpA AND IDENTIFICATION OF A Clp-SPECIFIC SUBSTRATE*
The clpA gene, which codes for the ATP-binding subunit of the ATP-dependent Clp protease of Escherichia coli, has been sequenced. The coding region contains a single open reading frame for a protein of 758 amino acids; within the amino acid sequence are two consensus sequences for ATP-binding sites. The sequence of ClpA does not resemble that of other previously described ATPases or Lon, the other sequenced ATP-dependent protease of E. coli, except in the ATP-binding site consensus region.
The clpA gene is expressed as a monocistronic message. Primer extension experiments define a major start point of transcription at -183 relative to the start of translation. A rho-independent terminator is located 23 bases beyond the end of the coding region.
The ClpA protein is degraded in vivo in a Clp-dependent fashion (t1/2 ~ 60 min). A fusion protein containing the first 40 amino acids of ClpA fused in frame to β-galactosidase is degraded very rapidly in a clpA+ host (t1/2 ~ 3 min) but not in a clpA- host. This fusion protein is the first Clp-specific substrate described.
The rapid degradation of specific regulatory proteins is an important aspect of cellular control mechanisms (1-4). In addition, abnormal and damaged proteins, as well as many foreign proteins introduced into cells by cloning or infection, are rapidly degraded intracellularly. The turnover of these short-lived proteins in Escherichia coli and other organisms is frequently energy dependent (1-8). It is increasingly evident that the energy dependence of protein degradation in E. coli in vivo is largely attributable to the action of proteases that are either totally dependent on or highly activated by ATP (9-12). Identifying the ATP-dependent proteases found in cells, studying their mechanisms of action, and determining the unique features of in vivo substrates of these proteases should help our understanding of this mode of physiological regulation.
The best understood ATP-dependent protease is the Lon protease of E. coli (9, 10). Lon substrates in vivo include the cell division inhibitor SulA, the λ antiterminator N, and the positive regulator of capsule synthesis. Also, the degradation of many abnormal proteins is at least partially dependent on functional Lon protease in vivo (7, 8, 17-20).
In vitro, purified Lon protease directly cleaves multiple peptide bonds in a variety of denatured proteins and in purified λ N protein; maximal activity requires the continuous presence of ATP or an ATP analog (9, 10, 15, 21). In vitro studies imply that under physiological conditions, proteolysis by Lon is accompanied by hydrolysis of two ATPs per peptide bond cleaved (22). The deduced amino acid sequence of Lon protease, reported recently by Chin et al. (23), contains a sequence motif identical to the nucleotide-binding sites found in many ATP-binding proteins and ATPases; no other recognizable features in common with other proteases were noted. A second ATP-dependent protease of E. coli, which we have called Clp (called Ti by Chung and co-workers), has been identified and purified to homogeneity (11, 12, 24, 25). Clp protease also directly cleaves peptide bonds in various proteins in a process that requires ATP hydrolysis. Clp differs from Lon in structure, in its in vivo substrates, and in the regulation of synthesis of its gene products. Lon is a tetramer composed of identical 87-kDa subunits, whereas the Clp protease consists of two dissimilar subunits, ClpA (81 kDa) and ClpP (21 kDa). ClpA has been shown to have ATPase activity and to bind ATP (24, 25). ClpP has been shown to be labeled by the serine protease inhibitor diisopropyl fluorophosphate and has low endopeptidase activity against small peptides in the absence of ClpA (26). The Lon protease is regulated by htpR as part of the heat shock response (27, 28). ClpA is clearly not a heat shock protein (24), but its regulation has not been described. clpA mutants do not share the properties of lon mutants, although ClpA seems to contribute somewhat to the degradation of abnormal proteins in the absence of Lon. A comparison of these two energy-dependent proteases may give us the first opportunity to understand the essential elements of an energy-dependent protease system.
We report here the sequence of the clpA gene and its regulatory regions. We have found recently that ClpA contains two regions highly homologous with proteins from prokaryotic and eukaryotic cells; each of these domains contains a consensus sequence for a nucleotide-binding site. Sequence similarities between Clp and Lon protease are restricted to a very short region (<50 amino acids) centered on the consensus ATP-binding motif. We have used translational fusions of Lac to ClpA to define a substrate fusion protein specifically degraded by Clp.

EXPERIMENTAL PROCEDURES (continued): Sulfhydryl groups were determined with 5,5'-dithiobis(nitrobenzoic acid) (39). Aromatic amino acids were determined spectrophotometrically (40) by second-derivative UV spectroscopy of ClpA in 6 M guanidine hydrochloride. The molar concentration of ClpA was calculated from the concentration of tryptophan or tyrosine obtained from the second derivative of the UV absorbance and from the tryptophan and tyrosine content of the protein obtained from the sequence.
RESULTS
Sequence of clpA-In our previous paper (24) we reported the cloning of clpA and showed that the amino acid sequence of the amino-terminal portion of ClpA determined by protein sequence analysis agreed with that predicted from partial sequencing of the DNA of the cloned gene. The remainder of clpA has now been sequenced from two M13 clones, each carrying one of the strands of clpA. The internal primers used for sequencing are shown in Fig. 1. The entire clpA gene and the surrounding regulatory region were sequenced from both strands; the sequence is shown in Fig. 2. In this sequence, the open reading frame beginning at base pair 1 codes for a protein of 758 amino acids with a molecular mass of 83,875 daltons. This is in good agreement with the estimated size of 81 kDa determined for the purified protein by SDS-acrylamide gel electrophoresis. Table I shows reasonable agreement between the amino acid composition determined experimentally and that predicted from the sequence. Peptide sequences determined for Ti and provided to us are found within our predicted amino acid sequence. This agreement confirms that their purified Ti protein is in fact identical to ClpA and that the disagreement in amino acid composition probably results from an error with their earlier preparation.
The UV absorbance spectrum of ClpA in buffer B (24) at pH 7.5 showed maximal absorbance at 278 nm and an absorption coefficient of 0.40 ± 0.02 (mg/ml)^-1.
The pI of ClpA calculated from the sequence was 6.3.
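Parameters of this kind (molecular mass, pI, extinction coefficient) can be recomputed from a sequence, for example with Biopython's ProtParam module; the peptide below is a dummy placeholder, not ClpA, and the module's algorithms may differ slightly from the 1990 procedures used here. Substituting the real 758-residue ClpA sequence should give values close to the reported 83,875 Da and pI 6.3.

```python
# Sketch of sequence-derived parameters using Biopython (pip install biopython).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MSFSGTEHVLYALAQDKRWGKSTVIK"  # dummy placeholder sequence, NOT ClpA
pa = ProteinAnalysis(seq)
print(f"Molecular mass: {pa.molecular_weight():,.0f} Da")
print(f"Theoretical pI: {pa.isoelectric_point():.2f}")
# Returns (Cys reduced, cystine-bridged) molar extinction coefficients:
reduced, oxidized = pa.molar_extinction_coefficient()
print(f"Molar extinction coefficient (reduced Cys): {reduced} M^-1 cm^-1")
```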
The previously observed sizes of ClpA protein fragments from truncated copies of the gene and from kan insertions in the gene (24) agree well with those predicted from the sequence. The NruI site, used as the start point for the clpA164 deletion, is located 300 base pairs upstream of the translation start (base pairs -297 to -292). The codon usage pattern for clpA is not significantly different from that of general E. coli proteins (data not shown).
Transcription of clpA-The in vivo start points of clpA transcription from both the chromosome and a plasmid-borne gene were determined by primer extension, as described under "Experimental Procedures." Three relatively strong bands were detected, which would place transcription start points within 200 base pairs of the start of translation when chromosomal RNA was the template (Fig. 3, lanes a, b, and d). All three bands were increased about 20-fold when RNA from a transformant carrying multiple copies of the clpA gene was used (Fig. 3, lane e), and all were absent when the RNA was extracted from the ΔclpA164 mutant (Fig. 3, lane c). The first of the putative start sites begins at -183 from the translation start and is preceded by -10 and -35 regions that are reasonably close to consensus (underlined in Fig. 2). Beyond the end of the coding region, the sequence can form a stem and 4-base-pair loop, consistent with a transcription termination signal (Fig. 4).

Predicted Structure for the ClpA Protein-ClpA has been shown to have ATPase activity in vitro and to interact with ClpP to activate ATP-dependent proteolysis (24, 25). Therefore, ClpA would be expected to have an ATP-binding site. Examination of the sequence of ClpA revealed that the sequence Gly-X-Y-Gly-Val-Gly-Lys-Thr occurs twice, at amino acid residues 214-221 and 495-502 (underlined in Fig. 2). This sequence corresponds to part A of a two-part ATP-binding consensus sequence (Fig. 5). This or a closely related version of the motif is found in nearly all ATP-binding proteins examined to date (23). According to Walker, part B of the consensus has 3 or 4 hydrophobic residues (Φ) followed by aspartate or glutamate (Φ4-Asp/Glu) and appears 50-100 amino acids to the carboxyl-terminal side of part A. In ClpA, part B sequences are found at amino acid residues 281-286 and 560-564, about 60 amino acids away from their respective part A sequences (Fig. 2, lined regions). Data base analyses by Chin et al. (23) indicate that parts A and B of the consensus only occur in proteins that bind ATP. Thus, the occurrence of two such consensus motifs in ClpA strongly suggests that ClpA has two binding sites for ATP.
We have reported recently that the ATP-binding consensus sequences in ClpA are prominent features of two regions of the primary protein sequence defined by very close homology to a second E. coli Clp-like protein, ClpB, and to a group of proteins found in other bacteria, lower eukaryotes, and plants. Because each of these regions shows conservation of sequence with the corresponding regions of the different genes, and the first region is bounded in the plant genome by introns, we refer to them as domain 1 and domain 2. Domain 1 of ClpA (amino acid residues 183-415, encoded by nucleotides 547-1245) shares 54% identical and 88% similar amino acids with ClpB, and domain 2 (residues 420-609, encoded by nucleotides 1258-1827) shares 53% identical and 89% similar amino acids with ClpB. Conservation between ClpA and the Clp-like proteins from other organisms is virtually the same as that between ClpA and ClpB of E. coli. In contrast, homology between the sequences of the two domains of ClpA is limited to the two relatively short regions immediately surrounding the two parts of the ATP-binding consensus sequence (Fig. 5). Domain 1 is longer than domain 2 and contains elements found in the β-subunit of E. coli F1 ATPase that are not found in domain 2. Thus, although it is likely that both domains bind ATP, the differences between them suggest that the two domains have functionally distinct roles in the enzyme.
As reported by Chin et al. (23), Lon protease has a single ATP-binding consensus sequence. Remarkably, there are no extensive sequence homologies between Lon protease and ClpA outside of the narrow region around the ATP-binding consensus sequence. An alignment of the consensus sequences in ClpA, Lon protease, and several other ATPases from E. coli is shown in Fig. 5. There are no absolute conservations outside of the core consensus ((Gly/Ala)-X4-Gly-Lys-(Thr/Ser), spacer, Φ4-(Asp/Glu)), although groups of proteins have identical amino acids at certain positions. All of the proteins have 2-3 hydrophobic amino acids within the 4 amino acids immediately preceding the first glycine/alanine in part A, and most have hydrophobic amino acids at positions 2-3, 5-6, and 8 following the threonine. It is worth noting that the basic amino acid often included in the consensus 5-8 amino acids before the first glycine (23) is not found in several of the proteins (and therefore may not be a necessary feature of such a site).
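The part A core consensus can be expressed as a simple pattern scan over a protein sequence, as sketched below; the fragment is invented, but running the same scan on the real ClpA sequence should report the two sites at residues 214-221 and 495-502 noted above.

```python
# Scan for the Walker A (part A) core consensus described in the text:
# (Gly/Ala)-X(4)-Gly-Lys-(Thr/Ser), where X is any residue.
import re

WALKER_A = re.compile(r"[GA]....GK[TS]")

def walker_a_sites(seq):
    """Return (start_residue, end_residue, motif) for each match, 1-indexed."""
    return [(m.start() + 1, m.end(), m.group()) for m in WALKER_A.finditer(seq)]

fragment = "DLRYLAIRIVGQDEALKAVGPSGVGKTELAKA"  # hypothetical fragment, not ClpA
for start, end, motif in walker_a_sites(fragment):
    print(f"Walker A at residues {start}-{end}: {motif}")
```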
More extensive analyses of the sequences of ClpA and Lon have revealed few similarities that might reflect the common enzymatic properties of the two proteases. The spacings between part A and part B are similar in both domains of ClpA and in Lon protease, but this is not a unique feature of the proteases, since the spacing is also similar in DnaA, RecA, and NtrA. The region between the Gly-Lys-Thr and the Φ4-Asp in Lon is very basic (as it is in RecD, UvrD, and helicase) but is neutral in both domains of ClpA. Sequence alignments between ClpA and the other proteins in Fig. 5 were calculated either with BESTFIT or with SEQHP (42) using 140-160-amino-acid regions centered on the ATP-binding consensus sequences. The region in domain 2 of ClpA shows a better-quality alignment with Lon than with other ATPases and aligns better with Lon than does the corresponding region of domain 1. Domain 2 of ClpA would thus appear to be evolutionarily more closely, albeit still quite distantly, related to Lon protease.
Secondary structure predictions for the regions around the ATP-binding consensus sequences in both domains of ClpA, in RblA (a highly conserved homolog of ClpA found in Rhodopseudomonas blastica (43)), in Lon protease, and in the β-subunit of F1 ATPase are shown in Fig. 6. The ATP-binding consensus sequences would be expected to lie in structures known to form the elements of a nucleotide-binding pocket or Rossmann fold (44), the essential elements of which are β-sheet-Gly-Lys-Thr-loop-α-helix in part A and α-helix-loop-Φ4-β-sheet in part B. Domain 1 of ClpA and RblA conform reasonably well to the equivalent segments of the β-subunit of F1 ATPase (Fig. 6). Domain 2 shows some significant differences compared with domain 1 and appears to resemble Lon protease more closely in predicted structure, particularly around part B, where the predicted Φ4-β-sheet is very short and is followed by an α-helix terminating in a strong turn. The positions of predicted turns are quite similar in Lon and domain 2 of ClpA, which partially reflects the locations of proline residues in the primary sequence. The prolines in this region of ClpA …

Fig. 6 legend: The PEPPLOT program (42), which uses the algorithm of Chou and Fasman to calculate α-helix and β-sheet propensities, was used to predict possible secondary structures in ClpA, RblA, Lon protease, and the β-subunit of F1 ATPase from the respective amino acid sequences. Regions for which α-helices or β-sheets are equally predicted are shown lightly shaded, regions for which no preferred structure was predicted were assumed to be coils, and turns are indicated wherever strong predictions or several weak predictions of turns were made.
These similarities between domain 2 and Lon protease further suggest that these sites have equivalent functions in the respective enzymes.
Domain 1 of ClpA, on the other hand, has a number of features in common with the β-subunit of F1 ATPase, in addition to the structural similarities mentioned above. The positions of prolines are similarly spaced in the primary sequence, and most of the predicted turns are located in comparable locations (data not shown). Domain 1 shows a slightly better quality amino acid alignment with the β-subunit of F1 ATPase than does domain 2. As reported elsewhere,² beyond the φ₄-Asp (part B of the consensus sequence), 2 tyrosines known to be located in the ATP-binding pocket of the β-subunit of F1 ATPase are found in the arrangement Tyr-X₈-Thr-X₁₄-Tyr, at positions +80 in domain 1 of ClpA and at +88 in F1 ATPase. The location of these residues in domain 1 suggests that, as with the β-subunit of F1 ATPase, ATP hydrolysis occurs at this site.
A site very similar to part B of the consensus sequence is found at about +115, measuring from the φ₄-Asp in ClpA, and at +133 in the β-subunit of F1 ATPase. It is followed by an α-helix and a region rich in basic residues. The occurrence of a second part B (Ile-Asp-Val-Ile-Asp in ClpA) was first noted by Craig Squires for the family of ClpA-like proteins.¹ That this second part B is highly conserved in ClpA-like proteins in other bacteria and in higher organisms implies that it is important for the integrity of ClpA. We have also found it in UvrB, UvrD, DnaB, and RecD, but it is not found in Lon protease, domain 2 of ClpA, the α-subunit of F1 ATPase, RecA, Rep helicase, or NtrA. The more extensive similarities between domain 1 of ClpA and the β-subunit of F1 ATPase serve to underscore the differences between the two domains of ClpA and support the conclusion that they probably have functionally distinct although interdependent roles in the activity of Clp protease.
ClpA-LacZ Protein Fusions: Our previous work had demonstrated that clpA, in contrast to lon, is not regulated by the heat shock σ factor, htpR (24). In order to study changes in expression of ClpA under different physiological conditions, we constructed an in-frame fusion between the 5′-terminal region of clpA (up to base pair 121 of Fig. 2) and lacZ. This fusion encodes the amino-terminal 40 amino acids of ClpA followed by a 9-amino acid linker joined to the 9th amino acid of β-galactosidase.
A transcriptional fusion of clpA to lacZ, carrying the same fragment of clpA, was also constructed. Both fusions contain 1000 base pairs in front of the clpA translation start codon. Both fusions were transferred by homologous recombination to λRS45; these λ derivatives were used to construct single copy lysogens of the clpA-lac fusions for the study of clpA synthesis and accumulation. Initial tests suggested a major difference between the expression of the translational fusion, SB84, in clpA⁺ and clpA⁻ hosts. Lysogens of the clpA⁻ host were visibly more Lac⁺ than those in the clpA⁺ host. The transcriptional fusion, SB85, did not show the same clpA-dependent difference. Such a difference could reflect either ClpA-dependent translational regulation of ClpA synthesis or the specific degradation of the ClpA-LacZ protein fusion by the Clp protease. Protein turnover experiments with the fusion demonstrate that the second possibility is true (Fig. 7), although translational regulation may also contribute to the difference. The ClpA-LacZ fusion is degraded with a half-life of about 4 min in clpA⁺ hosts but with a half-life greater than 20 min in a clpA⁻ host. lon mutants have no effect on the accumulation of ClpA-LacZ (data not shown). Therefore, this fusion, which carries only the first 40 amino acids of ClpA, is degraded in a Clp-dependent fashion. Since anti-β-galactosidase antibody was used to detect the fusion protein in the experiment of Fig. 7, it can be concluded that the entire LacZ portion of the protein is degraded, and no large intermediates accumulate.

¹ C. Squires, personal communication.

Instability of ClpA in Vivo: Given the instability of the ClpA-LacZ fusion and in vitro observations on the instability of ClpA activity,² it seemed possible that ClpA is itself a substrate for Clp-dependent degradation.
The half-life of ClpA protein in vivo was determined by pulse labeling and immunoprecipitation of ClpA followed by gel electrophoresis and autoradiography (see "Experimental Procedures"). In wild-type cells, ClpA was degraded with a t½ of approximately 1 h (Fig. 8A). Although this rate of degradation is not as rapid as some of the regulatory degradation that occurs in E. coli, it is sufficient to remove almost half of the protease from the cell during each generation. Moreover, ClpA was stable in vivo in a mutant lacking the proteolytic component of Clp protease, ClpP (Fig. 8A). Thus, active Clp protease is required for ClpA degradation in the cell. The half-life of ClpP was also determined by the same method; ClpP is not degraded in vivo (data not shown). In experiments with purified ClpA and ClpP, excess ClpA protein was rapidly degraded in an ATP-dependent manner in a reaction that required both active ClpA and active ClpP.² Thus, it is likely that modulation of ClpA levels in the cell is accomplished at least in part by autodegradation of free ClpA subunits.
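The half-life estimate described above amounts to fitting first-order decay to band intensities from the pulse-chase experiment. The sketch below shows one way to do this by log-linear regression; the intensity values are hypothetical numbers chosen to give a t½ near 1 h, not the measured data.

```python
import numpy as np

# Hypothetical band intensities from a pulse-chase autoradiogram.
time_min = np.array([0.0, 15.0, 30.0, 60.0, 90.0, 120.0])
intensity = np.array([100.0, 84.0, 71.0, 50.0, 35.0, 25.0])

# First-order decay: ln(I) = ln(I0) - k*t, so fit a line to ln(I).
slope, _ = np.polyfit(time_min, np.log(intensity), 1)
k = -slope
t_half = np.log(2) / k
print(f"k = {k:.4f} per min, t1/2 = {t_half:.0f} min")
```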
Accumulation of ClpA in cells was measured by running equivalent amounts of cell extract from cells grown to low, moderate, or high density on SDS-polyacrylamide gels, blotting, and immunochemical detection of ClpA. ClpA amounts per unit of cells did not appear to vary more than 10-20% during exponential growth (Fig. 8B). Examination of the accumulation of β-galactosidase in clpA-lac fusions suggests that the synthesis of ClpA increases when the cell density reaches an A₆₀₀ of 0.5 (data not shown). Therefore, the combination of any changes in synthesis rate with growth and degradation of ClpA results in a relatively constant amount of ClpA in the cell.
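The balance between synthesis, growth dilution, and Clp-dependent degradation described in this paragraph can be captured by a one-variable rate equation. The sketch below is a minimal illustration with hypothetical rate constants, not a fitted model of the data.

```python
import numpy as np

# dC/dt = k_syn - (k_deg + mu) * C, with degradation from t1/2 ~ 1 h and
# dilution from a ~40-min doubling time. All parameter values are hypothetical.
k_syn = 10.0                   # synthesis, arbitrary units per min
k_deg = np.log(2) / 60.0       # degradation, per min
mu = np.log(2) / 40.0          # growth dilution, per min

c_star = k_syn / (k_deg + mu)  # analytic steady state

# Euler integration shows the level settling near C* regardless of start.
c, dt = 0.0, 0.1
for _ in range(int(300 / dt)):  # simulate 300 min
    c += dt * (k_syn - (k_deg + mu) * c)
print(f"steady state = {c_star:.1f}; simulated level at 300 min = {c:.1f}")
```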
Given the instability of both ClpA and the ClpA-LacZ fusion, it seems reasonable to suggest that the amino terminus of ClpA acts as a recognition region for Clp-dependent degradation. The degradation may be prevented when ClpA is in a proper complex with ClpP; the ClpA-LacZ fusion protein, which is presumably unable to form such a complex, may be rapidly degraded because it is always free.
DISCUSSION
The clpA structural gene and regulatory elements have been sequenced and the start points of transcription for the gene determined. The gene does not seem to be part of a larger operon. This also seems to be true of lon, another ATP-dependent protease of E. coli. clpP, the gene for the second component of the Clp protease, has been mapped by us³ to min 10 on the E. coli chromosome, far from clpA.
The two consensus nucleotide-binding sites of ClpA resemble the single site in Lon and other ATPases from E. coli; explicit amino acid homologies to Lon do not extend significantly beyond these sites, except for the disposition of proline residues noted. Nevertheless, we would tentatively conclude that domain 2 of ClpA is more likely to have structural and functional similarity to Lon. Although secondary structure predictions based on sequence alone are not entirely reliable, the consistency with which the features found in nucleotide-binding domains are predicted for the ATPases discussed in this paper makes it more likely that the agreements in structure between Lon and domain 2 of ClpA and between the β-subunit of F1 ATPase and domain 1 of ClpA are significant. What the functional equivalence of domains 1 and 2 to these other proteins means in mechanistic terms is not yet clear.
ClpA possesses a basal ATPase activity and a substrate-activated ATPase activity.² Preliminary results in vitro indicate that it is possible to inhibit protease activity and protease-stimulated ATPase activity without affecting basal ATPase activity. In light of the two putative ATP-binding sites identified by sequencing, it is possible that basal ATPase activity occurs in one domain and protease-stimulated ATPase activity occurs in the other. Alternatively, one domain could contain the catalytic site, and the second domain could have an allosteric ATP-binding site, occupancy of which is required for ATPase activity at the other site. Although both models are equally possible, we favor the former, inasmuch as both F1 ATPase and Lon protease, which appear by sequence analysis to be functionally analogous to domains 1 and 2, respectively, each possesses intrinsic ATPase activity. Experiments are in progress to demonstrate the binding of ATP to both sites in ClpA and, by site-directed mutagenesis, to alter the activity of each of the possible active sites.
The finding that clpA does resemble another gene in E. coli as well as genes in a variety of prokaryotic and eukaryotic organisms¹ suggests that the clpA organization may in fact turn out to be a more generally used motif for ATP-dependent proteases than the lon organization.
The cellular function of the Clp protease is not yet clear. The widespread conservation of Clp-like protease genes may suggest that this protease is responsible for a fairly general and central housekeeping function rather than for the degradation of specific substrates that may be unique to specific organisms. It is fairly unusual to find families of duplicated genes in E. coli. The other examples thus far detected include the ribosomal RNA genes and tufA and tufB, essentially identical genes for translation elongation factor EF-Tu (45). ClpA and ClpB differ in the amino-terminal 200 amino acids and in a central domain of 180 amino acids that clpB but not clpA contains. It seems possible that the diverged amino termini of these proteins reflect different targets. If the sensitivity of the fusion protein containing the first 40 amino acids of ClpA to Clp-dependent degradation is due to recognition of this amino-terminal sequence, Clp interaction with its substrates may depend on amino-terminal sequences. Recognition of the amino-terminal amino acids is one component of the recognition of substrates for the ubiquitin-dependent degradation of substrates in eukaryotic cells (46). Since ClpA retains an amino-terminal methionine, the recognition of Clp substrates must take into account sequences beyond the amino-terminal amino acid.
clpA synthesis does not increase on heat shock as lon synthesis does (24). Instead, the pattern of expression of clpA-lacZ translational fusions may suggest that accumulation increases when cells are at mid-logarithmic growth, when oxygen levels begin to fall (data not shown). The conditions for optimum ClpA synthesis, and possibly for optimum Clp activity, are apparently very different from the heat shock, aerobic conditions that favor the synthesis of Lon and the increased degradation of Lon substrates, and may provide some hint of the role of Clp for E. coli. | 5,832.4 | 1990-05-15T00:00:00.000 | [
"Biology"
] |
A Genome-Wide Association Study Identifies New Loci Involved in Wound-Induced Lateral Root Formation in Arabidopsis thaliana
Root systems can display variable architectures that contribute to nutrient foraging or increase tolerance of abiotic stress conditions. Root tip excision promotes the developmental progression of previously specified lateral root (LR) founder cells, which makes it easy to measure the branching capacity of a given root with respect to its genotype and/or growth conditions. Here, we describe the natural variation among 120 Arabidopsis thaliana accessions in root system architecture (RSA) after root tip excision. Wound-induced changes in RSA were associated with 19 genomic loci using genome-wide association mapping. Three candidate loci associated with wound-induced LR formation were investigated. Sequence variation in the hypothetical protein encoded by the At4g01090 gene affected wound-induced LR development, and its loss-of-function mutants displayed a reduced number of LRs after root tip excision. Changes in a histidine phosphotransfer protein putatively involved in cytokinin signaling were significantly associated with LR number variation after root tip excision. Our results provide a better understanding of some of the genetic components involved in LR capacity variation among accessions.
INTRODUCTION
Strong modulation of root system architecture (RSA) by environmental cues, such as nutrient and water availability, is a well-documented process in Arabidopsis thaliana (Giehl and von Wirén, 2014; Robbins and Dinneny, 2015). In primary roots (PRs), a regular pre-branching pattern of lateral roots (LRs) is established by an endogenous periodic oscillation in gene expression near the root tip (Moreno-Risueno et al., 2010). A subset of xylem pole pericycle cells within the pre-branch sites becomes specified as LR founder cells. Subsequently, LR founder cells undergo a self-organizing and non-deterministic cell division patterning (Lucas et al., 2013; von Wangenheim et al., 2016) to initiate a LR primordium that eventually emerges through the PR tissues (Peret et al., 2009; Du and Scheres, 2018). However, the developmental progression of individual LR primordia is dependent on environmental cues, such as water distribution within the soil (Bao et al., 2014). In addition, a local auxin source from the LR cap of the PR, which is derived from the auxin precursor indole-3-butyric acid (IBA), determines whether a pre-branch site is specified or not (Xuan et al., 2015). The spatial distribution of LRs is not fixed, yet the total number of LR-competent sites remains stable over time. Root tip excision promotes the developmental progression of nearly all pre-branch sites toward LR emergence, providing an accurate measure of LR branching capacity. This latter approach allows assessing whether changes in LR pre-patterning have occurred in different genotypes and/or growth conditions (Van Norman et al., 2014). These results are in agreement with the current view that cells at the root tip are capable of integrating information about the local soil environment, tailoring the RSA for optimal nutrient and water uptake or after PR damage (Robbins and Dinneny, 2015).
Genome-wide association (GWA) studies have contributed to the identification of natural variation in key genes controlling PR growth under control and abiotic stress conditions (Meijon et al., 2014; Slovak et al., 2014; Satbhai et al., 2017; Bouain et al., 2018). Natural variation in RSA has also been reported (Rosas et al., 2013). Salt-induced changes in RSA were associated with more than 100 genetic loci identified by GWA mapping, some of which are involved in ethylene and abscisic acid (ABA) signaling (Julkowska et al., 2017). In addition, strong additive effects of phosphate starvation on LR density and of salt stress on LR length were found in a recent study with a large number of Arabidopsis accessions (Kawa et al., 2016). Their results suggested that the integration of signals from phosphate starvation and salt stress might partially rely on endogenous ABA signaling. One of the candidate genes identified in these studies was HIGH-AFFINITY K⁺ TRANSPORTER1 (HKT1), previously identified for its role in salinity tolerance by modulating sodium/potassium homeostasis (Munns et al., 2012). Targeted expression of HKT1 in pericycle cells reduced LR formation under salt stress (Julkowska et al., 2017). Recently, Ristova et al. (2018) reported a comprehensive atlas of RSA variation upon treatment with auxin, cytokinin (CK), and ABA in a large number of A. thaliana accessions. In that study, hierarchical clustering analyses identified groups of accessions sharing similar or diverse responses to a particular hormone perturbation, which can be very useful for identifying accessions that behave differently from the bulk and for using them as parents for QTL mapping.
To explore the natural variation of LR branching capacity in Arabidopsis (Van Norman et al., 2014), we performed a wound-induced LR formation assay in 174 accessions from the Haplotype Map (HapMap) collection (Weigel and Mott, 2009). GWA mapping using data from 120 accessions revealed 162 SNP associations with several RSA traits measured after root tip excision. SNPs affecting six genes were found significantly associated with LR number variation.
Plant Materials and Growth Conditions
Our population for GWA mapping consisted of 174 natural inbred lines (i.e., accessions) of A. thaliana (L.) Heynh. selected from the 1001 Genomes Project (Weigel and Mott, 2009) based on marker information and seed availability (Supplementary Table S1). The laboratory strain Columbia-0 (Col-0) was chosen as the reference. The following lines (in the Col-0 background) were used to isolate T-DNA homozygous mutants of the studied genes: N572850, N586312, N616200, and N620707 (Supplementary Table S2). The ahp1 ahp2 ahp3 (Hutchison et al., 2006) mutants were also used. All lines used were obtained from the Nottingham Arabidopsis Stock Centre (NASC¹). Seeds were stored at 4 °C for several weeks (>12) to break dormancy.
Seeds were surface-sterilized in 2% (w/v) NaClO and rinsed with sterile water before being transferred to 120 mm × 120 mm × 10 mm Petri dishes containing 75 mL of one-half-strength Murashige & Skoog (MS) medium with 2% sucrose, 8 g/L plant agar (Duchefa Biochemie, Netherlands) and 1× Gamborg B5 vitamin mixture (Duchefa Biochemie). After 4 days of stratification at 4 °C in darkness, plates were wrapped in aluminum foil and transferred (0 days after sowing) to an MLR-352-PE growth chamber (Panasonic, Japan) at 22 ± 1 °C for 3 days in a nearly vertical position. Plates were unwrapped (3 days after sowing) and seedlings were grown for another 3 days under continuous light (50 µmol·m⁻²·s⁻¹). For each accession, 12 seeds were sown per Petri dish in triplicate (36 samples/line). Sixteen consecutive sowings including 11 accessions each were established. Additionally, Col-0 was included in all the sowings to be used as the growth reference accession and for normalization purposes (Supplementary Figure S1A). Lines with a germination percentage lower than 80% and with ambiguous marker information were discarded from further analysis (Supplementary Table S1).
Induction of Lateral Root Formation
To induce LR development during early seedling growth (Van Norman et al., 2014), we excised about 2 mm of the root tip using a sterile scalpel in a laminar flow hood at 6 days after sowing (Figure 1A). Next, samples were transferred back to the growth chamber and followed for 4 days for the analysis of several root traits, as described below.
Image Processing and Parameter Measurement
Petri dishes were imaged daily from 6 to 10 days after sowing using an Epson Perfection V330 Photo scanner (Seiko Epson Corporation, Nagano, Japan) at a resolution of 600 dpi, and images were saved as RGB color images in JPEG file format. Scanned images were processed using EZ-Rhizo (Armengaud et al., 2009) with available plug-in macros to convert them into binary images, remove noise, fill gaps, and skeletonize them prior to automated root detection (Supplementary Figures S1B-S1E). PR length was directly obtained from the EZ-Rhizo output files from scanned images at 6 days after sowing. A highly significant and positive correlation was found between PR length estimated by EZ-Rhizo and that directly measured with the ImageJ software² from hand-drawn roots (Supplementary Figure S1F). LR number was visually counted from the scanned images between 6 and 10 days after sowing. LR emergence onset corresponds to the day when the first newly emerged LR was visible. LR density was estimated at 8 and 10 days after sowing as the LR number/PR length ratio. Data values were normalized relative to Col-0 values in each sowing by dividing each individual value by the Col-0 average (Supplementary Table S3).
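The per-sowing normalization to Col-0 described above can be expressed in a few lines of pandas. The sketch below uses hypothetical column names and values; it is not the authors' analysis script.

```python
import pandas as pd

# Hypothetical trait table: one row per seedling.
df = pd.DataFrame({
    "sowing":    [1, 1, 1, 2, 2, 2],
    "accession": ["Col-0", "Leo-1", "Voeran-1", "Col-0", "Leo-1", "Voeran-1"],
    "lr_number": [5.0, 7.1, 3.1, 4.5, 6.8, 2.9],
})

# Divide each value by the Col-0 mean of its own sowing.
col0_mean = (df[df["accession"] == "Col-0"]
             .groupby("sowing")["lr_number"].mean())
df["lr_number_rel"] = df["lr_number"] / df["sowing"].map(col0_mean)
print(df)
```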
Statistical Analyses and Heritability Estimation
Statistical analyses of the data were performed using the StatGraphics Centurion XV (StatPoint Technologies, United States) and SPSS 21.0.0 (SPSS Inc., United States) software packages. Data outliers were identified based on aberrant standard deviation values and were excluded from subsequent analyses as described elsewhere (Aguinis et al., 2013). One-sample Kolmogorov-Smirnov tests were performed to analyze the goodness-of-fit between the distribution of the data and a theoretical normal distribution. Non-parametric tests and data transformation were applied when needed. To compare the data for a given variable, we performed multiple comparison analyses with the ANOVA F-test or Fisher's LSD (Least Significant Difference) method. Differences were considered significant at the 5% level (P-value < 0.05) unless otherwise indicated.
The broad-sense heritability (H²) for the studied dataset was calculated as H² = σ²G / (σ²G + σ²GE/e + σ²e/(re)), in which σ²G, σ²GE, σ²e, r, and e represent the estimated variances for the genetic effects, genotype-environment interactions, and random errors, and the number of replications (12) and number of environments (three), respectively. The estimated variances for σ²G, σ²GE, and σ²e were obtained by ANOVA using data values normalized to the Col-0 reference accession from 106 of the studied lines.
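The heritability formula above translates directly into code. The sketch below assumes hypothetical variance-component values in place of the ANOVA estimates.

```python
# Broad-sense heritability: H2 = var_G / (var_G + var_GE/e + var_e/(r*e)).
def broad_sense_h2(var_g, var_ge, var_e, r=12, e=3):
    return var_g / (var_g + var_ge / e + var_e / (r * e))

# Hypothetical variance components, giving an H2 in the reported range.
print(f"H2 = {broad_sense_h2(var_g=2.0, var_ge=0.3, var_e=1.2):.2f}")
```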
Population Structure Analysis
The population structure of the selected accessions was estimated using the Bayesian model-based clustering algorithm (Porras-Hurtado et al., 2013) implemented in the Structure v2.3.4 software³ (Falush et al., 2003). To this end, we used a collection of 319 randomly selected bi-allelic synonymous (likely evolutionarily neutral) SNP markers from available sequence data (Atwell et al., 2010; Cao et al., 2011; Seren et al., 2012). An in-house Matlab script (Supplementary Table S4) was used for single nucleotide polymorphism (SNP) data selection and file formatting. Structure analysis was performed for K = 1 to K = 10 clusters with 20 replicates and 50,000 burn-in period iterations, followed by 50,000 Markov chain Monte Carlo iterations, using a population admixture ancestry model. To determine the most likely number of subpopulations (K), we applied the ΔK method, as described elsewhere (Evanno et al., 2005).
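The ΔK statistic of Evanno et al. (2005) is computed from the spread and curvature of the Structure log-likelihoods across replicate runs. The sketch below illustrates the calculation on hypothetical likelihood values; a strong ΔK peak at K = 2 would correspond to the two subpopulations reported here.

```python
import numpy as np

# Hypothetical log-likelihoods L(K); rows: K = 1..5, columns: replicates.
L = np.array([
    [-9100, -9110, -9095],
    [-8700, -8690, -8705],
    [-8650, -8660, -8645],
    [-8640, -8648, -8642],
    [-8635, -8641, -8639],
], dtype=float)

mean_l = L.mean(axis=1)
sd_l = L.std(axis=1, ddof=1)
# Second difference L''(K) = L(K+1) - 2*L(K) + L(K-1), defined for K = 2..4.
second_diff = mean_l[2:] - 2 * mean_l[1:-1] + mean_l[:-2]
delta_k = np.abs(second_diff) / sd_l[1:-1]
for k, dk in zip(range(2, 5), delta_k):
    print(f"K = {k}: deltaK = {dk:.1f}")
```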
Genome-Wide Association Studies
Genome-wide association mapping was performed using the GWAPP web interface⁴, which contains genotypic information for up to 250,000 bi-allelic SNP markers (Seren et al., 2012). GWAS was conducted for the studied traits using the linear regression model (LM) to identify associations between the phenotypes of the 120 studied accessions and the 205,978 SNPs available in the database. Relative LR numbers were transformed using the y = √x function to fit the theoretical normal distribution. Association mapping was performed excluding from the analyses all SNPs with a minor allele frequency < 0.12. SNPs with a −log₁₀(P-value) > 6.5 were considered significantly associated with the studied trait (Supplementary Table S5). Manhattan plots, representing the genomic position of each SNP and its association [−log₁₀(P-value)] with the studied trait, were downloaded from the GWAPP web interface. We analyzed sampling bias in the GWAS by systematically removing one or several geographically isolated accessions and found that it did not make any difference to the detected SNP associations (Supplementary Table S6). We selected non-synonymous SNPs with a −log₁₀(P-value) > 6.5 (P-value = 3.16 × 10⁻⁷) for further studies.
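The single-SNP linear-model scan described above (square-root transform, MAF filter, per-SNP regression, −log₁₀(P) threshold) can be sketched as follows on simulated data; the GWAPP service itself implements the models used in the paper, so this is only a conceptual illustration.

```python
import numpy as np
from scipy import stats

# Simulated toy data: 120 accessions, 1000 bi-allelic SNPs.
rng = np.random.default_rng(0)
n_acc, n_snp = 120, 1000
geno = rng.integers(0, 2, size=(n_acc, n_snp))    # 0/1 alleles
pheno = np.sqrt(rng.gamma(5.0, 1.0, size=n_acc))  # sqrt-transformed trait

# Drop SNPs with minor allele frequency below 0.12.
maf = np.minimum(geno.mean(axis=0), 1 - geno.mean(axis=0))
keep = np.where(maf >= 0.12)[0]

# Test each retained SNP with a simple linear regression.
log10p = {j: -np.log10(stats.linregress(geno[:, j], pheno).pvalue)
          for j in keep}
hits = [j for j, lp in log10p.items() if lp > 6.5]
print(f"{len(keep)} SNPs tested, {len(hits)} above the 6.5 threshold")
```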
Genotyping
Seedlings with T-DNA homozygous insertions in the annotated genes were identified by PCR verification with T-DNA-specific primers and a pair of gene-specific primers (Supplementary Table S2). Genomic DNA isolation and PCR genotyping of T-DNA insertion loci were performed as described elsewhere (Pérez-Pérez et al., 2004).
Gene Expression Analysis by Real-Time Quantitative PCR
Primers amplified 81-178 bp of the cDNA sequences (Supplementary Table S2). To avoid amplifying genomic DNA, the forward and reverse primers bound different exons or hybridized across exon-exon junctions.
RNA extraction and cDNA synthesis were performed as described elsewhere (Villacorta-Martín et al., 2015). For real-time quantitative PCR, 14 µl reactions were prepared with 7 µl of the SsoAdvanced Universal SYBR Green Supermix (Bio-Rad, United States), 4 µM of specific primer pairs, 1 µl of cDNA, and DNase-free water (up to a total reaction volume of 14 µl). PCR amplifications were carried out in 96-well optical reaction plates on a Step One Plus Real-Time PCR System (Applied Biosystems, United States). Three biological and two technical replicates were performed for each gene. The thermal cycling program started with a step of 10 s at 95 °C, followed by 40 cycles (15 s at 95 °C and 60 s at 60 °C), and the melt curve (from 60 to 95 °C, with increments of 0.3 °C every 5 s). Dissociation kinetics of the amplified products confirmed their specificity.
Primer validation and gene expression analyses were performed using the absolute quantification method (Lu et al., 2012) with a standard curve that comprised equal amounts of each cDNA sample. The housekeeping At4g26410 gene (RGS1-HXK1 INTERACTING PROTEIN 1, RHIP1) (Czechowski et al., 2005) was chosen as an internal control and to ensure reproducibility. For each gene, the mean of fold-change values relative to the Col-0 reference genotype was used for graphic representation. Relative expression values were analyzed using SPSS 21.0.0 (SPSS Inc., United States) by applying the Mann-Whitney U-test for statistical differences between cDNA samples (P-value < 0.05).
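Absolute quantification against a standard curve reduces to fitting Ct against log₁₀(input) on a dilution series and inverting the fit for the samples. The sketch below uses hypothetical Ct values, not the study's measurements.

```python
import numpy as np

# Hypothetical 10-fold dilution series of the pooled-cDNA standard.
dilution_qty = np.array([1.0, 0.1, 0.01, 0.001])  # relative input
dilution_ct = np.array([18.2, 21.6, 25.0, 28.4])  # measured Ct

# Fit Ct = m*log10(quantity) + b; slope near -3.32 means ~100% efficiency.
m, b = np.polyfit(np.log10(dilution_qty), dilution_ct, 1)
efficiency = 10 ** (-1.0 / m) - 1

# Invert the curve to estimate quantities for unknown samples.
sample_ct = np.array([22.5, 24.1])
sample_qty = 10 ** ((sample_ct - b) / m)
print(f"slope = {m:.2f}, efficiency = {efficiency:.2f}, qty = {sample_qty}")
```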
Natural Variation of Wound-Induced Lateral Root Formation
To validate our experimental approach (Figure 1A), we studied wound-induced LR formation in Columbia-0 (Col-0) for 7 days. PR length remained almost unchanged after root tip excision throughout the experiment (Supplementary Figure S1G). New LRs were already visible at 1 day after PR tip excision (1 dae) and reached 4.97 ± 2.36 (n = 467) LRs at 4 dae (Figure 1B). At 7 dae, the number of LRs had slightly increased, but it was not possible to measure it unambiguously due to overlap between the LRs of adjacent seedlings (Figures 1B,C). We did not observe a clear spatial pattern of LR emergence from the PRs except that, in all cases, the new LRs emerged from the convex side of the PR (Figure 1C, inset). We found slight variation in PR length and LR number between the different sowings (Supplementary Figures S2A,B), which might be caused by subtle environmental differences in the growth chamber.
We studied wound-induced LR formation in a collection of 173 additional accessions selected from the 1001 Genomes Project (Weigel and Mott, 2009; Supplementary Table S1). In 34 of the studied accessions, the germination percentage at 6 days after sowing was lower than 80%, and these were discarded from further analysis; another 20 accessions were discarded because of ambiguous genotypes at the GWAPP web interface (Seren et al., 2012) (Supplementary Table S1). We found variation in all the studied traits (Supplementary Figure S2B). Exceptionally, one or two LRs were observed before root tip excision (0 dae) in some samples, but these were not considered. The broad-sense heritability (H²) was calculated for each of the studied traits (see section "Materials and Methods"). Heritability estimates ranged between 0.90 (LR emergence onset and LR density) and 0.95 (LR number). Broad-sense heritability for PR length was 0.93. Interestingly, we found a positive and significant correlation between LR number and PR length at 4 dae (r = 0.83; Figure 2A), as well as a negative and significant correlation between LR number at 4 dae and LR emergence onset (r = −0.69; Figure 2B). LR number ranged from 1.38 ± 0.82 in Ru3.1-31 (PR length: 0.28 ± 0.07 cm; n = 24) to 9.42 ± 4.84 in Kidr-1 (PR length: 1.90 ± 0.50 cm; n = 24). Hence, the reduced LR number in Ru3.1-31 compared to Kidr-1 was likely caused by its reduced PR length. Some accessions, such as Leo-1 and Voeran-1, displayed contrasting phenotypes as regards their LR number (7.07 ± 1.41 and 3.14 ± 1.48 LRs, respectively; n = 29) although they displayed similar PR lengths (Figures 2A,C). On the other hand, Leo-1 and Aitba-2 displayed similar LR numbers, although the LRs were longer in Leo-1, likely due to its earlier LR emergence onset (0.55 ± 0.57 days in Leo-1 and 1.82 ± 0.50 days in Aitba-2; n = 29; Figures 2B,C). Ped-0 displayed very short PRs, while its wound-induced LRs were much longer (Figure 2C). As regards LR density, Castelfed4.2 and Leo-1 displayed extreme phenotypes, with 2.98 ± 1.35 and 8.05 ± 2.37 roots/cm (n = 29), respectively.
Assessment of Population Structure
The observed phenotypic distribution for the studied traits (PR length, LR emergence onset, LR number, and LR density) suggested that these traits were controlled by multiple genes, that some of the causal alleles are pleiotropic (i.e., affect several of these traits), and that the studied population (n = 120 accessions) was polymorphic for those causal alleles. We determined the genetic relationships among the studied accessions using a Structure analysis with 319 genome-wide, randomly selected, synonymous (likely evolutionarily neutral) SNP markers already available (see section "Materials and Methods"). Structure analysis of these accessions identified two distinct genetic groups (Figure 3A) that closely correspond to their geographic regions of origin (Figure 3B): the so-called "West" subpopulation, including 101 accessions, and the "East" subpopulation, with the remaining 19 accessions. However, a detailed analysis of these results indicated a continuous genetic shift from "East" to "West" accessions that follows the geographical distribution of A. thaliana and that likely arose through local haplotype structure, as has been previously proposed (Platt et al., 2010).
We found a significant variation range for the studied traits between these two genetically distinct subpopulations (Figure 3C). Overall, accessions belonging to the "East" subpopulation displayed longer PRs and a higher number of LRs than those of the "West" subpopulation. However, some accessions of the "East" subpopulation, such as Shigu-1, displayed lower phenotypic values for the studied traits than most of their relatives (Figure 4). On the other hand, some accessions of the "West" subpopulation that are highly genetically divergent from those in the "East" subpopulation (i.e., Aitba-2, HKT2-4, Leo-1, Mrk-0, and Pra-6) displayed higher numbers of LRs compared with their closest relatives (Figure 4). Despite some population structure among the studied lines, and given the high heritability estimates, there is potential for the identification of natural alleles affecting wound-induced LR formation responses through GWA mapping with our dataset.
Genome-Wide Association Mapping of Wound-Induced LR Formation
To obtain insight into the genetic basis of the observed variation in wound-induced responses in young A. thaliana roots, we performed GWA mapping (see section "Materials and Methods"). The significance of association was first evaluated with three different statistical models (LM, KW, and AMM; Supplementary Figures S3A-C), and no significant SNP associations were identified after randomization of the phenotypic values within the studied lines (Supplementary Figure S3D). Although LM and KW usually include more false positives than AMM, they do not present any risk of P-value overcorrection when applied to traits correlated with population structure (Filiault and Maloof, 2012). We used a conservative threshold of −log₁₀(P-value) > 6.5 and a minor allele frequency (MAF) > 12% to select the SNPs associated with a given trait. A total of 162 SNP associations were found with the LM method for the studied parameters (Supplementary Table S5). We found 32 SNPs associated with PR length, with P-values ranging from 1.41 × 10⁻¹⁰ to 3.04 × 10⁻⁷. Thirty-two SNPs were significantly associated with LR emergence onset (P-values ranging from 1.22 × 10⁻⁸ to 3.09 × 10⁻⁷), and only one SNP was found associated with LR density. The largest number of significantly associated SNPs was found for LR number (n = 114), with P-values ranging from 8.27 × 10⁻¹¹ to 3.15 × 10⁻⁷. Consistent with our previous observation that PR length and LR number are significantly correlated, 11 SNPs were significantly associated with both traits; similarly, six significantly associated SNPs were shared between LR emergence onset and LR number (Supplementary Figure S4A).
Next, we classified the selected SNPs based on their molecular effects (Supplementary Figure S4B). About 36% of the significantly associated SNPs were located in intergenic regions, and 17.2% of the SNPs lay in the coding region of the annotated gene, causing amino acid changes in the protein (Supplementary Figure S4B). Previous reports have shown that, due to linkage disequilibrium, multiple significantly associated SNPs should be found within a small chromosome region for true associations (Rajarammohan et al., 2018). To reduce the number of selected loci for further studies, we focused on 19 candidate genomic regions based on the following criteria (Supplementary Figure S4C): (1) P-value of associated SNPs < 3.16 × 10⁻⁷ (which corresponded to a LOD score > 6.5), (2) presence of multiple significantly associated SNPs within an average 10 kb genomic window, and (3) presence of at least one non-synonymous SNP within the selected region. We found five genomic regions putatively contributing to PR length variation in the studied population (Figure 5A), with one (At1g04260), two (At1g04470), three (At2g22660), one (At4g01090), and two (At4g22920, At4g22940) non-synonymous SNPs each (Supplementary Table S5). Three genomic regions were identified as regards their effect on variation in LR emergence onset (Figure 5B), each with one non-synonymous SNP (At1g72250, At1g72300, and At5g20980, respectively; Supplementary Table S5). We found 14 genomic regions putatively involved in the observed variation in LR number among the studied accessions (Figure 5C). Interestingly, the non-synonymous SNPs of three of these regions (dubbed 2′, 4′, and 5′) overlapped with three genomic regions also selected as being involved in PR length variation (Supplementary Table S5). Hence, the affected genes in these cases, At1g04470, At4g01090, and At4g22940, might indirectly contribute to the LR number differences, likely through their direct effect on PR length before root tip excision. The remaining regions identified for LR number were specific to this trait (Supplementary Table S5) and therefore deserve further studies. On the contrary, no other genomic region fulfilled our selection criteria as regards LR density, and this trait was not considered further (Figure 5D).

FIGURE 4 | Natural variation of LR architecture after root tip excision in 120 accessions of Arabidopsis thaliana. Average ± standard deviation (SD) values of (A) relative PR length, (B) relative LR emergence onset, (C) relative LR number, and (D) relative LR density at 4 dae as compared with the Col-0 reference accession. Light-colored bars indicate accessions belonging to the "East" subpopulation (names indicated in gray). The Col-0 reference is shown in green. Accessions are sorted based on their relative LR numbers. Accessions with extreme phenotypes (+, maximum; -, minimum) are also indicated.

FIGURE 5 | Manhattan plots of associations between SNPs and the studied parameters using a linear regression model (LM). (A) Relative PR length, (B) relative LR emergence onset, (C) relative LR number, and (D) relative LR density at 4 dae as compared with the Col-0 reference accession. Dashed horizontal red lines indicate the threshold for significance in genome-wide association (GWA) mapping, set at −log₁₀(P-value) > 6.5. Red dots indicate the positions of non-synonymous significantly associated SNPs. Numbers 1-19 indicate the genomic regions considered for further studies. Some statistically significant SNPs were found for both PR length and LR number (2 and 2′, 4 and 4′, 5 and 5′).
Selection of Candidate Genes Involved in Wound-Induced LR Formation
To identify allelic variation in genes contributing to the observed differences in LR number after root tip excision, we selected 20 non-synonymous SNPs for further studies (Supplementary Table S6). Although SNPs in intergenic regions could also be causative, we decided to focus on non-synonymous polymorphisms, as their later characterization can be performed more easily using reverse genetics tools. Based on both geographic distribution and genotype (Figure 3), we hypothesized that Nemrut-1 and Yeg-1 might represent atypical accessions due to ancient migration and genetic isolation. To discard false-positive SNPs arising from this spurious association, we repeated the GWA mapping after removing either one or both accessions, which allowed us to reduce the number of significantly associated SNPs contributing to LR number to 11 (Supplementary Table S6). Due to the genetic structure of the studied accessions, we performed ANOVA analyses for the "East" and "West" subpopulations independently. Polymorphisms at eight SNPs affecting six genes (At1g17700, At3g58220, At4g01090, At4g33530, At5g16220, and At5g19710; Table 1) were found significantly associated with LR number variation (Supplementary Table S6). Four haplotypes were detected for the selected SNPs within the At4g01090 gene (CAA, CAT, CTA, and TAT). The accessions containing the TAT haplotype displayed a significant increase in LR number, irrespective of their subpopulation of origin (Supplementary Figure S5A). Indeed, the T to A polymorphism at Chr4:472726 accounted for the quantitative differences in LR number on its own. In addition, we observed a haplotype-dependent relationship between SNPs at At5g16220 and At5g19710, which are separated by 1.4 Mb and were previously assigned to two different candidate genomic regions (Figure 5C). The GA haplotype for these two genes corresponded to the higher phenotypic values for LR number (Supplementary Figure S5B). To our knowledge, this is the first example of two linked quantitative trait nucleotides (QTNs) detected through GWA mapping, and further experiments will address the functional relationship between the two genes and wound-induced LR number.
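The haplotype comparisons described above boil down to grouping accessions by a multi-SNP haplotype and testing the group means. The sketch below illustrates this with simulated LR numbers; the haplotype labels follow the text, but the values and group sizes are invented.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated LR numbers per haplotype group (all values hypothetical).
rng = np.random.default_rng(1)
haplos = ["CAA"] * 30 + ["CAT"] * 25 + ["CTA"] * 25 + ["TAT"] * 40
means = {"CAA": 4.5, "CAT": 4.8, "CTA": 5.0, "TAT": 7.5}
df = pd.DataFrame({
    "haplotype": haplos,
    "lr_number": [rng.normal(means[h], 1.5) for h in haplos],
})

# One-way ANOVA across haplotype groups.
groups = [g["lr_number"].to_numpy() for _, g in df.groupby("haplotype")]
f_stat, p_val = stats.f_oneway(*groups)
print(df.groupby("haplotype")["lr_number"].mean().round(2))
print(f"ANOVA: F = {f_stat:.1f}, P = {p_val:.2e}")
```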
Experimental Validation of Candidate Genes
To validate the identification of novel genes involved in wound-induced LR formation in A. thaliana seedlings, we chose At4g01090, At4g33530, and At5g19710 for further studies. We searched for available T-DNA insertions in those three genes and identified homozygous mutants by means of PCR and sequencing (Supplementary Table S1). Based on haplotype studies, we found that the Chr4:472726 C/T polymorphism in the At4g01090 gene (Supplementary Figure S6A) was significantly associated with LR number variation, even in accessions belonging to the same subpopulation, such as Fei-0 (5.19 ± 1.88; n = 36) and Star-8 (8.36 ± 2.33; n = 36; Supplementary Figures S6B,C). Additionally, T-DNA homozygotes from the Salk_086312 segregating line displayed a reduced number of wound-induced LRs at 4 dae (2.82 ± 1.27; n = 34) in comparison to their wild-type siblings (6.68 ± 1.87; n = 63; Supplementary Figures S6D,E). The homozygous seedlings were also characterized by longer hypocotyls and shorter PRs (Supplementary Figure S6D). Our results confirmed that the hypothetical protein encoded by the At4g01090 gene participates in wound-induced LR development and that the observed natural variation in its protein sequence might affect its biochemical activity.
At4g33530 (Supplementary Figure S7A) encodes a potassium (K⁺) uptake transporter that is highly expressed in root hairs (Ahn et al., 2004). We found a statistically significant, non-synonymous SNP (Chr4:16128906) correlated with wound-induced LR phenotype variation (Supplementary Figure S7B). The accessions Pu2-7 and Aitba-2 differed in their LR number (6.44 ± 1.95; n = 32 and 9.53 ± 1.94; n = 34, respectively) and carried alternative alleles of the Chr4:16128906 marker (Supplementary Figure S7C). We identified T-DNA homozygotes from two Salk insertion lines interrupting the coding region of this gene (Supplementary Figure S7A). None of the studied homozygous mutants from the Salk_120707 and Salk_072850 lines displayed significant differences in wound-induced LR number compared with their wild-type siblings (Supplementary Figures S7D,E).
The At5g19710 gene (Figure 6A) encodes a histidine phosphotransfer protein (AHP) whose function in the CK transduction pathway has not yet been elucidated. There are six other known AHPs involved in CK responses (Hutchison et al., 2006). Phylogenetic tree reconstruction of AHPs including At5g19710 (Figure 6A) suggested that the annotated AHP protein encoded by this gene was incorrectly predicted due to an exon skipping, and it clustered together with the AHP4 negative regulator of CK signaling (Figure 6B; Moreira et al., 2013). We found that the Chr5:6665363 G/A polymorphism at the third exon of this gene (Figure 6A) was significantly associated with LR number variation (Figure 6C), even in accessions belonging to the same subpopulation, such as Ll-0 (3.91 ± 2.20; n = 32) and Pra-6 (7.91 ± 1.91; n = 33; Figure 6D). We identified a homozygous T-DNA insertion line for the At5g19710 gene whose seedlings showed a reduced number of wound-induced LRs (Figures 6E,F) due to a significant misregulation of At5g19710 gene expression (Figure 6G). We also confirmed that the triple ahp1 ahp2 ahp3 mutant, which is defective in CK root responses (Hutchison et al., 2006), displayed a significant reduction in wound-induced LRs at 4 dae compared to its wild-type background (Figures 6E,F). Taken together, our results seem to indicate that altered homeostasis of the AHP proteins required for CK signaling interferes with wound-induced LR formation, a statement that requires further investigation. Finally, we wondered whether there was an epistatic interaction between the allelic variants of some of the studied non-synonymous SNP markers (Chr4:472726, Chr4:16128906, and Chr5:6665363, respectively) that contributed to the observed variation in wound-induced LR formation in the studied population. Accessions sharing the CCG haplotype for these three markers displayed the smallest number of wound-induced LRs (Supplementary Figure S8A), while only the accessions with the haplotype containing a single polymorphism in the At4g01090 gene showed a significant increase (LSD; P-value < 0.01) in LR numbers (Supplementary Figure S8A). The individual contributions of the SNP polymorphisms in the other two genes considered, At4g33530 and At5g19710, hardly increased wound-induced LR numbers alone, but in combination (CGA haplotype) their effects on wound-induced LR formation were enhanced (Supplementary Figure S8A). Similar interactions were found between the other SNP pairs (TGG and TCA haplotypes).

[Table 1 footnote: gene expression data were obtained from the eFP browser (Winter et al., 2007); N/A, not available.]
Interestingly, we identified in the "West" subpopulation several accessions carrying haplotypes correlated with an increase in wound-induced LR numbers, such as Mrk-0 (8.17 ± 2.12; n = 36) and Vie-0 (6.33 ± 1.80; n = 33) (Supplementary Figure S8B). However, all accessions with the TGA haplotype, which combines the allelic variants that contribute to an increase in wound-induced LR formation, belonged to the "East" subpopulation, which suggests that this trait might be ancestral.
DISCUSSION
The spatial configuration of the RSA allows the plant to dynamically respond to changing soil conditions (Koevoets et al., 2016). Root plasticity relies on the integration of systemic and local signals of nutrient and water availability into the core developmental program of the root (Araya et al., 2014;Bao et al., 2014). Periodic fluctuations in auxin response within the vascular region of the PR near the meristem control the patterning of LR founder cell specification (Moreno-Risueno et al., 2010). A novel IBA-to-IAA conversion pathway in the outer LR cap cells creates a local auxin source that contributes to these periodic auxin fluctuations, which in turn are essential for LR pre-patterning (Xuan et al., 2015). Our wound-induced RSA assay is simple and provides an accurate measure of LR capacity, defined as the total number of competent sites for LR formation on a given root (Van Norman et al., 2014). Interestingly, a recent study has demonstrated that A. thaliana PRs might use a specific pathway to activate LR formation when the PR is damaged (Sheng et al., 2017). The authors found that WUSCHEL-related homeobox11 (WOX11), which is involved in de novo regeneration of adventitious roots from leaf explants (Liu et al., 2014), was also required for LR formation in soil conditions, likely upstream of the LATERAL ORGAN BOUNDARIES DOMAIN (LBD) genes required for LR initiation (Goh et al., 2012).
We found wide variation for the studied wound-induced RSA traits in our GWA mapping population. There was a clear correlation between the number of wound-induced LRs (i.e., LR branching capacity) and both PR length at the excision day and the time of LR emergence after excision. Some accessions, such as Leo-1 and Voeran-1, significantly differed in their LR number because of an earlier initiation of wound-induced LRs in Leo-1, which were also longer. Hence, Leo-1 contains alleles for higher LR branching capacity that positively contribute to an enhanced root system, which might allow survival in harsh environments. It will be interesting to evaluate whether differences in WOX11 expression between these accessions might account for the observed differences in RSA after root tip excision.
A population analysis of the studied accessions inferred two distinct genetic groups that closely corresponded to the geographic regions of A. thaliana native distribution and the proposed postglacial colonization routes in this species (Platt et al., 2010; Cao et al., 2011). Interestingly, the "East" subpopulation included accessions, mainly from Central Asia, with enhanced wound-induced RSA traits. In this case, genetic polymorphism may be strongly correlated with RSA traits because of demographic history, challenging the identification of the causal polymorphisms. Within the "West" subpopulation, however, there was a clear longitudinal gradient of genetic polymorphisms, such that the accessions in Central Europe were also genetically close to those in the "East" subpopulation. Additionally, Nemrut-1 and Yeg-1 were included in the "East" subpopulation in our study, which is in agreement with a recently proposed migration route connecting Asia and Africa from the south (Brennan et al., 2014). Although it is well known that population structure, among other effects, can complicate GWA studies in Arabidopsis (Filiault and Maloof, 2012), we reasoned that the studied traits showed broad phenotypic variation, globally as well as within the two subpopulations, which also exhibited continuous isolation by distance as observed earlier in this species (Platt et al., 2010). For example, some accessions from the "West" subpopulation, like Leo-1 and Pra-6, showed an enhanced root system after wounding, while other genetically and geographically close accessions (Cdm-0 and Qui-0, respectively) displayed a less complex RSA. It is well known that nutrients and other environmental signals in the soil might alter the RSA (Kellermeier et al., 2014). We thus speculate that the observed differences in wound-induced RSA traits might represent local adaptations to distinct ecological niches, and one of the environmental signals that might be involved is osmotic stress. Some accessions from the "East" subpopulation, such as Shigu-1 and Tamm-2, displayed lower RSA values than their close relatives. Hence, these contrasting accessions might be used as parents for QTL identification through conventional linkage association mapping under different soil stress conditions.
Through GWA mapping, we identified 162 SNP associations that significantly accounted for variation in wound-induced RSA traits, located at 19 genomic regions that were defined primarily by non-synonymous SNPs. As expected from our trait correlation analyses, we found a clear overlap between three genomic regions associated with PR length and LR number (2/2′, 4/4′, and 5/5′). In all these cases, the causal polymorphism(s) might affect genes with pleiotropic effects on RSA. GWA mapping has facilitated the identification of the molecular variants underlying complex traits in crops, such as heterosis, grain size (Si et al., 2016), or drought resistance (Wang et al., 2016). In all these examples, hundreds of genetic variants were identified and candidate genes were assigned based on prior knowledge. In most cases, the genetic variation affected regulatory regions of candidate genes, and functional validation using transgenic approaches was required.
Our results suggest that non-synonymous variation in the coding region of At4g01090 was significantly associated with wound-induced LR variation. At4g01090 encodes a hypothetical protein (DUF3133) of unknown function, which is expressed at higher levels in the endodermis of the elongation zone of the root and in the mature xylem (Winter et al., 2007). Other ortholog genes encoding proteins with the DUF3133 domain are At4g01410, which has been annotated as a late embryogenesis abundant (LEA) protein, and enhanced disease resistance 4 (EDR4), which modulates plant immunity by regulating clathrin heavy chain 2 (CHC2)-mediated vesicle trafficking (Wu et al., 2015). The protein encoded by At4g01410 has been found to interact with EDR4 (Mukhtar et al., 2011). One intriguing possibility is that these DUF3133-containing proteins might also interact with clathrin-coated vesicles during PIN-FORMED (PIN) endocytosis (Dhonukshe et al., 2007; Kitakura et al., 2011), which could then be directly linked to LR initiation (Ditengou et al., 2008). Additional experiments will be performed in our lab to confirm the involvement of DUF3133-containing proteins in clathrin-mediated PIN endocytosis during wound-induced LR formation.
At5g19710 encodes a histidine phosphotransfer protein belonging to the AHP bridge components of the His-Asp phosphorelay transduction pathway of CK signaling (Hwang et al., 2012). Five AHPs (AHP1-5) mediate the cytoplasmic-to-nuclear transduction of the CK signal by transferring the phosphoryl group from the CK receptors to nuclear type-B (positive) and type-A (negative) Arabidopsis response regulators (ARRs). AHP6 lacks the conserved His residue and negatively interferes with the CK response (Moreira et al., 2013), most likely by competing with AHP1-5 for interaction with CK-activated receptors (Mahonen et al., 2006). The AHP protein encoded by the At5g19710 gene resembled AHP4. Based on loss-of-function analysis (Hutchison et al., 2006), a negative role of AHP4 in a subset of CK responses (i.e., LR formation) has also been proposed. Interestingly, At5g19710 is specifically expressed in LR cap cells in a low-nitrogen environment, while it was significantly downregulated by a short nitrate treatment (Gifford et al., 2007). Despite the local inhibitory role of CKs on LR initiation (Laplaze et al., 2007), CKs are essential components of the systemic signaling network leading to the enhancement of LR formation where nitrate is available (Ruffel et al., 2016). In Arabidopsis, the adaptive root response to nitrate depends on the NRT1.1/NPF6.3 transporter/sensor system (Bouguyon et al., 2016). NRT1.1 represses LR emergence at low nitrate concentrations through its auxin transport activity, which lowers auxin accumulation in the LR primordia (Bouguyon et al., 2016). An additional layer of regulation of systemic N signaling involves TCP20 (Guan et al., 2014). TCP20 is a transcription factor from the TEOSINTE BRANCHED1, CYCLOIDEA, and PCF (TCP) family that binds the promoters of the type-A ARR5 and ARR7 genes at high nitrate levels and of NRT1.1 at low nitrate only (Guan et al., 2014). We speculate that the AHP protein encoded by At5g19710 might function as a negative regulator of a subset of CK responses in LR cap cells at low nitrate, leading to a net reduction in the number of competent sites for LR formation. Interestingly, cell-specific regulation of a transcriptional circuit including ARF8 and miR167 mediates LR outgrowth in response to nitrogen (Gifford et al., 2007). Additional experiments will be required to establish a functional link between the At5g19710-encoded AHP and the ARF8/miR167 circuit.
We found a statistically significant association with a non-synonymous SNP in the coding region of K⁺ UPTAKE TRANSPORTER5 (KUP5) that changes a Gln to a His residue in a conserved transmembrane domain of the protein. Potassium is an essential element in plant growth, as it affects osmotic regulation and cell water potential (Lebaudy et al., 2007). The Arabidopsis genome contains multigene families of potassium channels with distinct or redundant functions (Lebaudy et al., 2007), which might explain why the loss of function of KUP5 alone did not produce any effect on wound-induced RSA (this work). Consistently, loss-of-function mutations in three KUP family potassium efflux transporters, KUP6, 8, and 2, showed increased auxin responses and enhanced LR formation (Osakabe et al., 2013). As proposed earlier, these KUP transporters might coordinately control potassium homeostasis across root tissues, and the enhanced LR formation in the triple mutants might be caused by a local excess of potassium in the pericycle cells and its effect on cell cycle progression (Osakabe et al., 2013). We found an interesting epistatic interaction between the SNP polymorphisms in At4g33530 and At5g19710, which suggests a functional link between potassium uptake and CK signaling. CKs are known to regulate the uptake and metabolism of different nutrients (nitrogen, sulfate, phosphate, and iron) (Brenner et al., 2012), but the roles of CKs in potassium signaling are poorly understood (Nam et al., 2012).
Through GWA mapping, we have identified a number of significant non-synonymous polymorphisms that account for some of the variation found in wound-induced RSA. Our results highlight new regulators of LR formation in Arabidopsis, and further dissection of the developmental mechanisms involved might help to understand the genetic basis of the natural variation in root plasticity.
AUTHOR CONTRIBUTIONS
JP-P was responsible for conceptualization and supervision and provided the funding acquisition. MJ and JP-P were responsible for methodology and performed the formal analysis. MJ, SI, and JP-P were involved in the investigation, writing of the original draft, and review and editing of the manuscript. AP was responsible for software development.
FUNDING
This work was supported by the Ministerio de Economía, Industria y Competitividad (MINECO) of Spain (Grant Nos. AGL2012-33610 and BIO2015-64255-R) and by European Regional Development Fund (ERDF) of the European Commission.
ACKNOWLEDGMENTS
We are especially indebted to Ümit Seren (Gregor Mendel Institute of Molecular Plant Biology, Austria) for sharing relevant data for this project and the two reviewers for their useful suggestions. | 9,500.4 | 2019-03-15T00:00:00.000 | [
"Biology"
] |
Equivariant incidence algebras and equivariant Kazhdan-Lusztig-Stanley theory
We establish a formalism for working with incidence algebras of posets with symmetries, and we develop equivariant Kazhdan-Lusztig-Stanley theory within this formalism. This gives a new way of thinking about the equivariant Kazhdan-Lusztig polynomial and equivariant Z-polynomial of a matroid.
Introduction
The incidence algebra of a locally finite poset was first introduced by Rota, and has proved to be a natural formalism for studying such notions as Möbius inversion [Rot64], generating functions [DRS72], and Kazhdan-Lusztig-Stanley polynomials [Sta92, Section 6].
A special class of Kazhdan-Lusztig-Stanley polynomials that have received a lot of attention recently is that of Kazhdan-Lusztig polynomials of matroids, where the relevant poset is the lattice of flats [EPW16,Pro18]. If a finite group W acts on a matroid M (and therefore on the lattice of flats), one can define the W -equivariant Kazhdan-Lusztig polynomial of M [GPY17]. This is a polynomial whose coefficients are virtual representations of W , and has the property that taking dimensions recovers the ordinary Kazhdan-Lusztig polynomial of M . In the case of the uniform matroid of rank d on n elements, it is actually much easier to describe the S n -equivariant Kazhdan-Lusztig polynomial, which admits a nice description in terms of partitions of n, than it is to describe the non-equivariant Kazhdan-Lusztig polynomial [GPY17, Theorem 3.1].
While the definition of Kazhdan-Lusztig-Stanley polynomials is greatly clarified by the language of incidence algebras, the definition of the equivariant Kazhdan-Lusztig polynomial of a matroid is completely ad hoc and not nearly as elegant. The purpose of this note is to define the equivariant incidence algebra of a poset with a finite group of symmetries, and to show that the basic constructions of Kazhdan-Lusztig-Stanley theory make sense in this more general setting. In the case of a matroid, we show that this approach recovers the same equivariant Kazhdan-Lusztig polynomials that were defined in [GPY17].
An object of the category $C_W(P)$ consists of the following data:
• a k-vector space V;
• a direct product decomposition $V = \prod_{x \le y \in P} V_{xy}$, with each $V_{xy}$ finite dimensional;
• an action of W on V compatible with the decomposition.
More concretely, for any $\sigma \in W$ and any $x \le y \in P$, we have a linear map $\varphi^\sigma_{xy} : V_{xy} \to V_{\sigma(x)\sigma(y)}$, and we require that $\varphi^e_{xy} = \mathrm{id}_{V_{xy}}$ and that $\varphi^{\sigma'}_{\sigma(x)\sigma(y)} \circ \varphi^\sigma_{xy} = \varphi^{\sigma'\sigma}_{xy}$. Morphisms in $C_W(P)$ are defined to be linear maps that are compatible with both the decomposition and the action. This category admits a monoidal structure, with tensor product given by $(U \otimes V)_{xz} := \bigoplus_{x \le y \le z} U_{xy} \otimes V_{yz}$. Let $I_W(P)$ be the Grothendieck ring of $C_W(P)$; we call $I_W(P)$ the equivariant incidence algebra of P with respect to the action of W.
Example 2.1. If W is the trivial group, then I W (P ) is isomorphic to the usual incidence algebra of P with coefficients in Z. That is, it is isomorphic as an abelian group to a direct product of copies of Z, one for each interval in P , and multiplication is given by convolution.
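As a consistency check on Example 2.1, the following minimal Python sketch implements the ordinary (non-equivariant) incidence algebra of a small poset; the poset, its encoding, and all helper names are illustrative choices, not constructions from the paper.

```python
# Minimal sketch of the non-equivariant incidence algebra of a finite poset:
# elements are dictionaries keyed by intervals (x, y) with x <= y, and
# multiplication is convolution over intermediate elements, as in Example 2.1.
from itertools import product

def intervals(elements, leq):
    return [(x, y) for x, y in product(elements, repeat=2) if leq(x, y)]

def convolve(f, g, elements, leq):
    h = {}
    for (x, z) in intervals(elements, leq):
        h[(x, z)] = sum(f[(x, y)] * g[(y, z)]
                        for y in elements if leq(x, y) and leq(y, z))
    return h

# Example: the Boolean lattice of subsets of {0, 1}, ordered by inclusion.
elements = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
leq = lambda a, b: a <= b

zeta = {iv: 1 for iv in intervals(elements, leq)}   # zeta function
delta = {(x, y): int(x == y) for x, y in intervals(elements, leq)}

# zeta * zeta counts chains x <= y <= z; e.g. 4 chains in the full interval.
print(convolve(zeta, zeta, elements, leq)[(elements[0], elements[3])])  # -> 4
```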
Remark 2.2. If W acts on P and ψ : W ′ → W is a group homomorphism, then ψ induces a functor F ψ : C W (P ) → C W ′ (P ) and a ring homomorphism R ψ : I W (P ) → I W ′ (P ).
We now give a second, more down to earth description of $I_W(P)$. Let $\mathrm{VRep}(W)$ denote the ring of finite dimensional virtual representations of W over the field k. A group homomorphism $\psi : W' \to W$ induces a ring homomorphism $\Lambda_\psi : \mathrm{VRep}(W) \to \mathrm{VRep}(W')$. For any $x \in P$, let $W_x \subset W$ be the stabilizer of x. We also define $W_{xy} := W_x \cap W_y$ and $W_{xyz} := W_x \cap W_y \cap W_z$. Note that, for any $x, y \in P$ and $\sigma \in W$, conjugation by σ gives a group isomorphism $\psi^\sigma_{xy} : W_{xy} \to W_{\sigma(x)\sigma(y)}$, which induces a ring isomorphism $\Lambda_{\psi^\sigma_{xy}} : \mathrm{VRep}(W_{\sigma(x)\sigma(y)}) \to \mathrm{VRep}(W_{xy})$.
In this description, an element $f \in I_W(P)$ is a collection $(f_{xy})_{x \le y \in P}$, where $f_{xy} \in \mathrm{VRep}(W_{xy})$ and, for any $\sigma \in W$ and $x \le y \in P$, $f_{xy} = \Lambda_{\psi^\sigma_{xy}} f_{\sigma(x)\sigma(y)}$. The unit $\delta \in I_W(P)$ is characterized by the property that $\delta_{xx}$ is the 1-dimensional trivial representation of $W_x$ for all $x \in P$ and $\delta_{xy} = 0$ for all $x < y \in P$. The following proposition describes the product structure on $I_W(P)$ in this representation.

Proposition 2.3. For any $f, g \in I_W(P)$ and $x \le z \in P$,
$$(fg)_{xz} = \sum_{y \in [x,z]} \frac{|W_{xyz}|}{|W_{xz}|}\, \mathrm{Ind}_{W_{xyz}}^{W_{xz}}\!\big(f_{xy} \otimes g_{yz}\big),$$
where $f_{xy}$ and $g_{yz}$ are restricted to $W_{xyz}$ before taking the tensor product.
Remark 2.4. It may be surprising to see the fraction $|W_{xyz}|/|W_{xz}|$ in the statement of Proposition 2.3, since $\mathrm{VRep}(W_{xy})$ is not a vector space over the rational numbers. We could in fact replace the sum over $[x, z]$ with a sum over one representative of each $W_{xz}$-orbit in $[x, z]$ and then eliminate the factor of $|W_{xyz}|/|W_{xz}|$. Including the fraction in the equation allows us to avoid choosing such representatives.
Remark 2.5. Proposition 2.3 could be taken as the definition of I W (P ). It is not so easy to prove associativity directly from this definition, though it can be done with the help of Mackey's restriction formula (see for example [Bum13, Corollary 32.2]).
Remark 2.6. Suppose that $\psi : W' \to W$ is a group homomorphism, and for any $x, y \in P$, consider the induced group homomorphism $\psi_{xy} : W'_{xy} \to W_{xy}$. For any $f \in I_W(P)$, we have $R_\psi(f)_{xy} = \Lambda_{\psi_{xy}}(f_{xy})$. In particular, if $W'$ is the trivial group, then $R_\psi(f)_{xy}$ is equal to the dimension of the virtual representation $f_{xy} \in \mathrm{VRep}(W_{xy})$.
Before proving Proposition 2.3, we state the following standard lemma in representation theory.
Lemma 2.7. Suppose that $E = \bigoplus_{s \in S} E_s$ is a vector space that decomposes as a direct sum of pieces indexed by a finite set S. Suppose that G acts linearly on E and acts by permutations on S such that, for all $s \in S$ and $\gamma \in G$, $\gamma \cdot E_s = E_{\gamma \cdot s}$. For each $s \in S$, let $G_s \subset G$ denote the stabilizer of s. Then there exists an isomorphism of G-representations
$$E \;\cong\; \bigoplus_{[s] \in S/G} \mathrm{Ind}_{G_s}^{G} E_s.$$

Proof of Proposition 2.3. By linearity, it is sufficient to prove the proposition in the case where we have objects U and V of $C_W(P)$ with $f = [U]$ and $g = [V]$. This means that, for all $x \le y \le z \in P$, $f_{xy} = [U_{xy}]$, $g_{yz} = [V_{yz}]$, and $(U \otimes V)_{xz} = \bigoplus_{y \in [x,z]} U_{xy} \otimes V_{yz}$. The proposition then follows from Lemma 2.7 by taking $E = (U \otimes V)_{xz}$, $S = [x, z]$, and $G = W_{xz}$.
Let R be a commutative ring. Given an element $f \in I_W(P) \otimes R$ and a pair of elements $x \le y \in P$, we will write $f_{xy}$ to denote the corresponding element of $\mathrm{VRep}(W_{xy}) \otimes R$. An element g is a right inverse to f if and only if $f_{xx}\, g_{xx} = \delta_{xx}$ for all $x \in P$ and $(fg)_{xz} = 0$ for all $x < z \in P$. The second condition can be rewritten as
$$f_{xx}\, g_{xz} = -\sum_{x < y \le z} \frac{|W_{xyz}|}{|W_{xz}|}\, \mathrm{Ind}_{W_{xyz}}^{W_{xz}}\!\big(f_{xy} \otimes g_{yz}\big)$$
(with the restriction of $f_{xx}$ to $W_{xz}$ understood), and this equation has a unique solution for g. Thus f has a right inverse if and only if $f_{xx} \in \mathrm{VRep}(W_x) \otimes R$ is invertible for all $x \in P$. The argument for left inverses is identical, so it remains only to show that left and right inverses coincide.
Let g be right inverse to f . Then g is also left inverse to some function, which we will denote h. We then have so g is left inverse to f , as well.
Equivariant Kazhdan-Lusztig-Stanley theory
In this section we take R to be the ring $\mathbb{Z}[t]$, and for each $f \in I_W(P) \otimes \mathbb{Z}[t]$ and $x \le y \in P$, we write $f_{xy}(t)$ for the corresponding component of f. One can regard $f_{xy}(t)$ as a polynomial whose coefficients are virtual representations of $W_{xy}$, or equivalently as a graded virtual representation of $W_{xy}$. We assume that P is equipped with a W-invariant weak rank function in the sense of [Bre99, Section 2]. This is a collection of natural numbers $\{r_{xy} \in \mathbb{N} \mid x \le y \in P\}$ with the following properties: $r_{xy} > 0$ whenever $x < y$, $r_{xz} = r_{xy} + r_{yz}$ whenever $x \le y \le z$, and $r_{\sigma(x)\sigma(y)} = r_{xy}$ for all $x \le y$ and $\sigma \in W$. Note that the set of elements f with $\deg f_{xy}(t) \le r_{xy}$ for all $x \le y$ is a subalgebra of $I_W(P) \otimes \mathbb{Z}[t]$, and we define an involution $f \mapsto \bar f$ of this subalgebra by putting $\bar f_{xy}(t) := t^{r_{xy}} f_{xy}(t^{-1})$. An element $\kappa$ is called a P-kernel if $\kappa_{xx}(t) = \delta_{xx}(t)$ for all $x \in P$ and $\bar\kappa = \kappa^{-1}$.
Theorem 3.1. Let κ be a P-kernel, and let $I_W^{1/2}(P)$ denote the set of elements f with $f_{xx} = \delta_{xx}$ for all $x \in P$ and $\deg f_{xy}(t) < r_{xy}/2$ for all $x < y \in P$. Then there exist unique elements $f, g \in I_W^{1/2}(P)$ such that $\bar f = \kappa f$ and $\bar g = g\kappa$.

Proof. We follow the proof of [Pro18, Theorem 2.2]. We will prove existence and uniqueness of f; the proof for g is identical. Fix elements $x < w \in P$. Suppose that $f_{yw}(t)$ has been defined for all $x < y \le w$ and that the equation $\bar f = \kappa f$ holds where defined. Let
$$P_{xw}(t) := \sum_{x < y \le w} \frac{|W_{xyw}|}{|W_{xw}|}\, \mathrm{Ind}_{W_{xyw}}^{W_{xw}}\!\big(\kappa_{xy} \otimes f_{yw}\big)(t).$$
The equation $\bar f = \kappa f$ for the interval $[x, w]$ translates to
$$t^{r_{xw}} f_{xw}(t^{-1}) - f_{xw}(t) = P_{xw}(t).$$
It is clear that there is at most one polynomial $f_{xw}(t)$ of degree strictly less than $r_{xw}/2$ satisfying this equation. The existence of such a polynomial is equivalent to the statement
$$t^{r_{xw}} P_{xw}(t^{-1}) = -P_{xw}(t).$$
To prove this, we observe that, since $\bar f_{yw} = (\kappa f)_{yw}$ on the smaller intervals,
$$t^{r_{xw}} P_{xw}(t^{-1}) = \sum_{x < y \le w} \frac{|W_{xyw}|}{|W_{xw}|}\, \mathrm{Ind}_{W_{xyw}}^{W_{xw}}\!\big(\bar\kappa_{xy} \otimes (\kappa f)_{yw}\big)(t).$$
This is formally equal to the expression for $(\bar\kappa(\kappa f))_{xw} - (\kappa f)_{xw}$, which by associativity is equal to the expression for $((\bar\kappa\kappa) f)_{xw} - (\kappa f)_{xw} = (\delta f)_{xw} - (\kappa f)_{xw}$. Thus we have
$$t^{r_{xw}} P_{xw}(t^{-1}) = f_{xw} - (\kappa f)_{xw} = -P_{xw}(t).$$
Thus there is a unique choice of polynomial $f_{xw}(t)$ consistent with the equation $\bar f = \kappa f$ on the interval $[x, w]$.
We will refer to the element $f \in I_W^{1/2}(P)$ from Theorem 3.1 as the right equivariant KLS-function associated with κ, and to g as the left equivariant KLS-function associated with κ.
For any x ≤ y, we will refer to the graded virtual representations f xy (t) and g xy (t) as (right or left) equivariant KLS-polynomials. When W is the trivial group, these definitions specialize to the ones in [Pro18, Section 2].
Example 3.2. Let $\zeta \in I_W(P)$ be the element defined by letting $\zeta_{xy}(t)$ be the trivial representation of $W_{xy}$ in degree zero for all $x \le y$, and let $\chi := \zeta^{-1}\bar\zeta$. The function χ is called the equivariant characteristic function of P with respect to the action of W. We have $\bar\chi = \bar\zeta^{-1}\zeta = \chi^{-1}$, so χ is a P-kernel. Since $\bar\zeta = \zeta\chi$, ζ is equal to the left KLS-function associated with χ. However, the right KLS-function f associated with χ is much more interesting! See Propositions 4.1 and 4.3 for a special case of this construction.
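For readers who want to experiment, the following Python sketch (using sympy) carries out the non-equivariant specialization of Example 3.2: it builds the characteristic P-kernel $\chi = \zeta^{-1}\bar\zeta$ on a small Boolean lattice and solves $\bar f = \chi f$ degree by degree, recovering the (trivial) KLS polynomial 1. The poset encoding and helper names are illustrative, not from the paper.

```python
# Non-equivariant KLS computation on the Boolean lattice of subsets of {0, 1}.
import sympy as sp
from itertools import product

t = sp.symbols("t")
elements = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
leq = lambda a, b: a <= b
rank = lambda a, b: len(b) - len(a)
ivs = [(x, y) for x, y in product(elements, repeat=2) if leq(x, y)]

def bar(f):  # bar(f)_xy(t) = t**r_xy * f_xy(1/t)
    return {(x, y): sp.expand(t**rank(x, y) * f[(x, y)].subs(t, 1 / t))
            for (x, y) in ivs}

def conv(f, g):  # convolution product (trivial group case)
    return {(x, z): sp.expand(sum(f[(x, y)] * g[(y, z)]
            for y in elements if leq(x, y) and leq(y, z)))
            for (x, z) in ivs}

zeta = {iv: sp.Integer(1) for iv in ivs}
zinv = {}  # invert zeta interval by interval (the Moebius function)
for (x, z) in sorted(ivs, key=lambda iv: rank(*iv)):
    zinv[(x, z)] = sp.Integer(1) if x == z else -sum(
        zeta[(x, y)] * zinv[(y, z)]
        for y in elements if leq(x, y) and leq(y, z) and y != x)
chi = conv(zinv, bar(zeta))  # characteristic P-kernel

f = {}
for (x, w) in sorted(ivs, key=lambda iv: rank(*iv)):
    if x == w:
        f[(x, w)] = sp.Integer(1)
        continue
    rhs = sp.expand(sum(chi[(x, y)] * f[(y, w)]
          for y in elements if leq(x, y) and leq(y, w) and y != x))
    # t**r * f(1/t) - f(t) = rhs with deg f < r/2 forces f to be minus the
    # part of rhs of degree < r/2.
    r = rank(x, w)
    f[(x, w)] = -sum(coeff * t**exp[0]
                     for exp, coeff in sp.Poly(rhs, t).terms()
                     if exp[0] < sp.Rational(r, 2))

print(f[(elements[0], elements[3])])   # Boolean lattice: expect 1
```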
We next introduce the equivariant analogue of the material in [Pro18, Section 2.3]. If κ is a P -kernel with right and left KLS-functions f and g, we define Z := gκf ∈ I W (P ), which we call the equivariant Z-function associated with κ. For any x ≤ y, we will refer to the graded virtual representation Z xy (t) as an equivariant Z-polynomial.
Remark 3.4. Suppose that $\kappa$ is a P-kernel and f, g, Z are the associated equivariant KLS-functions and equivariant Z-function. It is immediate from the definitions that, if $\psi : W' \to W$ is a group homomorphism, then $R_\psi(f)$, $R_\psi(g)$, $R_\psi(Z)$ are the equivariant KLS-functions and equivariant Z-function associated with the P-kernel $R_\psi(\kappa)$. In particular, if we take $W'$ to be the trivial group, then Remark 2.6 tells us that the ordinary KLS-polynomials and Z-polynomials are recovered from the equivariant KLS-polynomials and Z-polynomials by sending virtual representations to their dimensions.
Matroids
Let M be a matroid, let L be the lattice of flats of M equipped with the usual weak rank function, and let W be a finite group acting on L. For flats $F \le G$, write $M_{FG}$ for the associated minor of M, and define $P \in I_W^{1/2}(L)$ by putting $P_{FG}(t) := P^{W_{FG}}_{M_{FG}}(t)$, the equivariant Kazhdan-Lusztig polynomial of the minor [GPY17]. If we then define $\hat Q \in I_W^{1/2}(L)$ by putting $\hat Q_{FG}(t) = (-1)^{r_{FG}}\, Q^{W_{FG}}_{M_{FG}}(t)$ for all $F \le G$, where $Q^{W_{FG}}_{M_{FG}}(t)$ denotes the equivariant inverse Kazhdan-Lusztig polynomial of the minor, we immediately obtain the following proposition.
Proposition 4.6. The functions $P$ and $\hat Q$ are mutual inverses in $I_W(L)$. | 2,996 | 2020-09-14T00:00:00.000 | [
"Mathematics"
] |
A Novel Type of Wireless V2H System with a Bidirectional Single-Ended Inverter Drive Resonant IPT
Hideki Omori1*, Shinya Ohara1, Masahito Tsuno2, Noriyuki Kimura1, Toshimitsu Morizane1 and Mutuo Nakaoka3. 1Department of Electrical and Electronics Systems Engineering, Osaka Institute of Technology, Japan; 2Nichicon Co., Ltd., Kyoto, Japan; 3University of Malaya, Kuala Lumpur, Malaysia
Introduction
In recent years, with great advances in power electronics technology, electric vehicles (EVs), which are highly efficient and create no air pollution, have come to offer promise as an effective solution to environmental problems.
One of the key issues for their successful and wide diffusion is the provision of adequate battery-charging infrastructure. In order to create a battery-charging infrastructure by installing equipment in locations such as carports in private homes, an inductive power transfer (IPT) wireless battery-charging system is indispensable for widespread adoption. The wireless battery-charging system eliminates the need for power cables and plugs: merely by parking the car in a designated spot, the battery can be charged. It is a promising system for wider diffusion because it is easy and safe to use for a broad range of users, including the elderly.
Typical wireless EV charging power-supply topologies have been half-bridge, push-pull, full-bridge, boost half-bridge, and boost full-bridge circuit configurations, all of which require a plurality of power switching devices [1-4].
The authors have previously put into practice a cost-effective and high-efficiency single-ended quasi-resonant soft-switching inverter [5]. Although EVs are primarily considered a method of clean transport, they can also be used in smart-house systems to supplement the energy storage. This vehicle-to-home (V2H) system essentially requires a bidirectional power transfer feature between the EV and the home. From a practical point of view, simple high-frequency inverter circuit topologies have to be selected in accordance with specific cost-effective applications. This paper therefore presents a new bidirectional IPT system for wireless V2H with the simplest components and low cost, aiming at wide diffusion for home use. Proposed is a novel type of bidirectional wireless EV charging system based on IPT technology with an efficient and compact single-ended quasi-resonant high-frequency inverter. The single-ended inverter, which can operate in the frequency range of 20-30 kHz under self-excited ZVS control with a zero-voltage-crossing detector of the resonant capacitor voltage, is evaluated from an experimental point of view. Furthermore, the transfer power and efficiency have been successfully improved by a pick-up circuit with a resonant component. A feasibility study of the V2H operation by simulation and experiment is also presented [6].
System configuration
An IPT-based wireless EV charging system is schematically illustrated in Figure 1. Figure 2 shows the proposed total system with a single-ended quasi-resonant high-frequency inverter. The system is mainly composed of a single-phase diode rectifier D1 with an L3-C3 filter; a single-ended quasi-resonant high-frequency inverter operating with a ZVS-PFM power regulation scheme in the frequency range of 20-30 kHz; a resonant capacitor C1 with the primary coil L1, which is loosely coupled to the pick-up coil L2 on the load side; a single-phase diode rectifier D2 with an L4-C4 filter connected to the battery bank of the EV; and a specific power-regulation control circuit based on self-excited timing-signal processing.
Abstract

Electric vehicles (EVs) offer promise as an effective solution to environmental problems. One of the keys to their successful diffusion is the provision of adequate battery-charging infrastructure. In order to create a charging infrastructure by installing equipment in locations such as carports in private homes, the wireless battery-charging system is very suitable. EVs can be used in smart-house systems to supplement the energy storage. This vehicle-to-home (V2H) system essentially requires a bidirectional power transfer feature between the EV and the home. This paper presents a new bidirectional inductive power transfer (IPT) system for wireless V2H with the simplest components and low cost, aiming at wide diffusion for home use. Proposed is a novel type of bidirectional wireless EV charging system with an efficient and compact single-ended quasi-resonant high-frequency inverter for V2H.

Operating principle of a proposed wireless EV charger

The periodic steady-state voltage and current operating waveforms of the single-ended quasi-resonant high-frequency inverter-fed DC-DC converter in Figure 2 are illustrated in Figure 3, covering Modes I-IV during one switching cycle. Figure 4 illustrates the switching-mode equivalent circuits according to the on-off operation of the single active switch Q1 and the passive switches in the secondary-side diode bridge D2. The circuit state in which the active switch Q1 on the primary side is cut off while the passive switch D2 on the secondary side is conducting is defined as Modes I and II (Figures 4a and 4b).

In Mode I (Figure 4a), when the active switch Q1 is turned off at t0, the LC resonant tank circuit operates, and the inductor current and capacitor voltage enter a resonant state. Because of the resonant operation, the voltage across the switch Q1 begins to increase sinusoidally from zero, so Q1 is turned off with a ZVS (zero-voltage switching) transition. The inductor currents iL1 and iL2 through the primary coil L1 and the secondary coil L2 decrease. As soon as the inductor current iL2 reaches zero at t1, the state of D2 moves to Mode II, shown in Figure 4b, and the inductor voltage vL2 across the secondary coil L2 changes from VC4 to -VC4.

As soon as the resonant capacitor voltage reaches the supply DC voltage VC3 at t2, the voltage across Q1 becomes zero. At this point, the antiparallel diode of Q1 turns on naturally, and the operating mode becomes Mode III, shown in Figure 4c. The circuit state in which the active switch Q1 on the primary side and the passive switch D2 on the secondary side are both conducting is defined as Modes III and IV (Figures 4c and 4d). The inductor current iL1 through the primary coil L1 and the active switch current iQ1 increase as a function of time during the Ton period, as illustrated in Modes III and IV of Figure 3.

When the secondary current iL2 reaches zero at t3, the state of D2 moves to Mode IV, shown in Figure 4d, and the secondary voltage vL2 changes from -VC4 to VC4. The active switch Q1 is turned off in accordance with its gate pulse duration time Ton. Judging from the operating voltage vQ1 and current iQ1 of the active switch, it is understood that the switch Q1 achieves a soft-switching turn-on transition with ZVS. Note that the diode D2 on the secondary side operates under a principle of ZCS (zero-current switching). As a result, the recovery current of D2 is considerably small, and the switching power loss of D2 as well as the switching noise can be minimized effectively. The active switch Q1 achieves complete soft-switching transitions with ZVS at both turn-on and turn-off. The synchronized PWM oscillator generates pulses synchronized with vL1 to achieve ZVS turn-on.

This system can supply an approximately constant current to the battery owing to the leakage inductance of the wireless coupling planar coils, without sensing any signals from the secondary-side pick-up coil circuit in the EV.

Wireless coupling coil units

It is noted that the primary-side power feeding unit L1 is loosely coupled to the pick-up coil L2 of the secondary-side power receiving unit for the battery-charging power supply. In practice, the circular coils on the primary and secondary sides are wound from litz wire in order to reduce power losses due to the skin effect. The two contactless planar circular coil units with ferrite core sheets, namely the power sending coil and the power receiving coil, are depicted in Figure 5. We propose a power transfer configuration through the rear glass of the vehicle, because this makes it easy to install the pick-up unit there; the coil diameter is determined by this configuration.

Transfer-power improvement by a resonant pick-up

The circuit equations of the two planar spiral coils are written in terms of L1 and L2, the self-inductances of the primary-side power feeding coil and the secondary-side power receiving coil, and M, the mutual inductance between L1 and L2, which is influenced by the gap distance: M = k√(L1·L2), where k is the magnetic coupling coefficient. As a matter of fact, the circuit parameters M and k of the two planar spiral coils depend on the gap distance, whereas the primary inductance L1 and secondary inductance L2 do not (Figure 6). The non-resonant output power Pnr of the system shown in Figure 2 is approximately determined by the effective voltages VL1 and VL2 of the power feeding and power receiving coils, the equivalent resistance R0 of the output circuit, and the operating frequency ω of the inverter. On the other hand, the output power Pr in the case of a power receiving coil L2 with a parallel-connected resonant capacitor C2 can be compared with Pnr: the resonant transfer power Pr is higher than the non-resonant transfer power Pnr under a certain load condition; therefore, a resonant wireless EV charger with a pick-up-side resonant capacitor is adopted in the Figure 2 system [7].
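As a rough numerical illustration of the relations recovered above, the following Python sketch computes the mutual inductance from M = k√(L1·L2) and the pick-up capacitance that tunes the receiving coil to the inverter frequency; all component values are assumptions for illustration and are not the authors' design values.

```python
# Illustrative sketch: given the recovered relation M = k * sqrt(L1 * L2) and
# the standard resonance condition f0 = 1 / (2*pi*sqrt(L2*C2)), estimate the
# mutual inductance and the pick-up capacitance C2 for a chosen frequency.
import math

L1 = 120e-6      # primary coil self-inductance [H] (assumed)
L2 = 120e-6      # pick-up coil self-inductance [H] (assumed)
k = 0.25         # loose coupling coefficient at a 30-50 mm gap (assumed)
f0 = 25e3        # operating frequency [Hz], inside the 20-30 kHz range

M = k * math.sqrt(L1 * L2)                  # mutual inductance [H]
C2 = 1.0 / ((2 * math.pi * f0) ** 2 * L2)   # resonant pick-up capacitance [F]

print(f"M  = {M * 1e6:.1f} uH")
print(f"C2 = {C2 * 1e6:.2f} uF")
```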
Operating characteristics
The measured waveforms of the proposed EV charging system are shown in Figure 7. The active switch Q1 turns on and off with ZVS, and the self-excited ZVS control scheme functions well. In particular, in the single-ended quasi-resonant inverter, note that the maximum voltage applied to the active switch Q1 becomes relatively high because of the quasi-resonant operation of the primary side.
The measured output power versus load characteristics in the non-resonant and resonant cases, for gap distances from 30 to 50 mm, are shown in Figure 8. The output power with the resonant pick-up circuit is 1.5-2 times higher than that with the non-resonant one. The measured power transfer efficiency is 90% at a 30 mm gap in the resonant pick-up case.
A prototype EV equipped with the pick-up coil unit of the resonant IPT wireless charging system, together with the power feeding coil unit, is shown in Figure 9a. As a feasibility study result, the wireless charging system can charge 4.5 kWh in 5 hours, as shown in Figure 9b.
A Wireless V2H System With Bidirectional Resonant Single-Ended Inverter
A smart house with a vehicle-to-home (V2H) system is shown in Figure 10. The connected EV plays a role in supplementing the storage battery system. As it is easy to connect the EV to the house with a wireless V2H system, the EV can be used efficiently by the smart house. The V2H system essentially requires a bidirectional power transfer feature between the EV and the home. A new system configuration using a single-ended inverter with a resonant IPT circuit for bidirectional power transfer is shown in Figure 11. The proposed system uses the simplest components at low cost.
In the EV charging mode, the home-side single-ended inverter produces high-frequency power, which is transferred to the vehicle-side circuit by resonant IPT, as shown on the left side of Figure 12. The capacitance C1 in the home-side circuit operates as the resonant capacitance for the ZVS transition, and the capacitance C2 in the vehicle-side circuit operates as the resonant capacitance for the resonant IPT. The input current IL3, smoothed by the DC choke L3, is constant without ripple; likewise, the output current IL4, smoothed by the DC choke L4, is constant without ripple (Figure 12). On the other hand, in the EV-battery discharging mode, the vehicle-side single-ended inverter produces high-frequency power, which is transferred to the home-side circuit by resonant IPT, as indicated on the right side of Figure 12. In this mode, C2 in the vehicle-side circuit operates as the resonant capacitance for the ZVS transition, and C1 in the home-side circuit operates as the resonant capacitance for power receiving in the resonant IPT. Figure 13 shows simulated operating waveforms of the proposed bidirectional circuit using the quasi-resonant high-frequency inverter in home-to-vehicle mode. The active switch SW1 is turned off and on with ZVS transitions, while the active switch SW2 is kept off. The vehicle-side circuit operates as a rectifier with a pick-up resonant circuit, so the system operates as a resonant forward converter. Figure 14 shows the operating modes of the bidirectional single-ended converter with resonant IPT in home-to-vehicle mode. Figure 15 shows the measured operating waveforms of the proposed wireless V2H system; these agree with the simulated waveforms described above, and the ZVS control scheme works well. The measured waveforms in vehicle-to-home mode likewise show that the reverse operation scheme functions well. Figure 16a shows the output power characteristics in home-to-vehicle mode: the output power decreases with increasing gap distance and can be controlled by the conduction time TON. Figure 16b shows that the output power characteristics in vehicle-to-home power transfer mode are the same as those in home-to-vehicle mode.
Conclusion
Presented has been a new wireless EV charging system based on IPT technology with minimum components and low cost, aiming at wide diffusion for home use. From a practical point of view, an efficient and compact single-ended quasi-resonant high-frequency inverter was selected. The single-ended inverter, which can operate in the frequency range of 20-30 kHz under synchronized self-excited ZVS control with a zero-voltage-crossing detector of the resonant capacitor voltage, was evaluated from an experimental point of view. The output power of the proposed system was successfully improved by the resonant IPT circuit: the output power with the resonant pick-up circuit was 1.5-2 times higher than that with the non-resonant one. Furthermore, a new resonant IPT wireless V2H system with a simple, low-cost bidirectional single-ended converter has been proposed, also aiming at wide diffusion for home use, and the results of a feasibility study for V2H by simulation and experiment were presented.
"Engineering"
] |
A Shamanskii-Like Accelerated Scheme for Nonlinear Systems of Equations
Newton-type methods with a diagonal update to the Jacobian matrix are regarded as among the most efficient and low-memory schemes for systems of nonlinear equations. One of the main advantages of these methods is their ability to solve nonlinear systems of equations having a singular Fréchet derivative at the root. In this chapter, we present a Jacobian approximation to the Shamanskii method to obtain a convergent and accelerated scheme for systems of nonlinear equations. Precisely, we focus on the efficiency of our proposed method and compare its performance with other existing methods. Numerical examples illustrate the efficiency and the theoretical analysis of the proposed method.
Introduction
A large class of scientific and management problems is often formulated as finding the values of x for which the evaluation of a function of that variable equals zero [1]. This description can be represented mathematically by the following system of nonlinear equations:
$$f_1(x_1, x_2, \ldots, x_n) = 0,$$
$$f_2(x_1, x_2, \ldots, x_n) = 0,$$
$$\vdots$$
$$f_n(x_1, x_2, \ldots, x_n) = 0, \tag{1}$$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$ and each $f_i$ is a nonlinear function for $i = 1, 2, \ldots, n$. The system of equations (1) can be written as
$$F(x) = 0, \tag{2}$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable in an open neighborhood of the solution $x^*$. These systems arise as natural descriptions of observed phenomena in numerous real-life problems, whose solution is an important goal of mathematical study. Recently, this area has been studied extensively [2,3]. The most powerful techniques for handling nonlinear systems of equations are to linearize the equations and iterate on the linearized set of equations until an accurate solution is obtained [4]. This can be achieved by obtaining the derivative or gradient of the equations. Various scholars stress that the derivatives should be obtained analytically rather than numerically. However, this is usually not convenient and, in some cases, not even possible, as the equations may be generated by a computer algorithm [2]. For a one-variable problem, the system of nonlinear equations defined in (2) reduces to a function $F : \mathbb{R} \to \mathbb{R}$ with f continuous on an interval $[a, b]$. Definition 1: Consider a system of equations $f_1, f_2, \ldots, f_n$; a solution of this system in one, two, or n variables is a point $(a_1, a_2, \ldots, a_n) \in \mathbb{R}^n$ such that $f_1(a_1, a_2, \ldots, a_n) = f_2(a_1, a_2, \ldots, a_n) = \cdots = f_n(a_1, a_2, \ldots, a_n) = 0$.
In general, the problem to be considered is that, for some function $f(x)$, one wishes to evaluate the derivative at some point x:
$$\text{given } f(x), \text{ evaluate } \mathrm{deriv} = \frac{df}{dx}.$$
This is often used to represent an instantaneous change of the function at a given point [5]. The Taylor series expansion of the function $f(x)$ about a point $x_0$ is an ideal starting point for this discussion [1].
Definition 3: Let f be an infinitely differentiable function; the Taylor series of $f(x)$ around a point a is the infinite sum
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n.$$
However, a continuously differentiable vector-valued function does not satisfy the mean value theorem (MVT), an essential tool in calculus [6]. Hence, academics suggested the use of the following theorem to replace the mean value theorem.
Most of the algorithms employed for obtaining the solution of Eq. (1) center on approximating the Jacobian matrix, which provides a linear map $T(x) : \mathbb{R}^n \to \mathbb{R}^n$ defined by
$$T(x) = F'(x) = \left[\frac{\partial f_i(x)}{\partial x_j}\right]_{i,j=1}^{n}. \tag{3}$$
If all of these partial derivatives are continuous, then we say the function F is continuously differentiable.
The most famous method for solving nonlinear systems of equations $F(x) = 0$ is the Newton method, which generates a sequence $\{x_k\}$ from any given initial point $x_0$ via
$$x_{k+1} = x_k - F'(x_k)^{-1} F(x_k), \qquad k = 0, 1, 2, \ldots, \tag{5}$$
where $F'(x_k)$ is the Jacobian of F at $x_k$. The sequence (5) converges quadratically to the solution $x^*$ if $x_0$ is sufficiently near the solution point and the Jacobian $F'(x^*)$ is nonsingular [7,8]. This convergence rate makes the method outstanding among other numerical methods. However, evaluating the Jacobian and solving the linear system for the step $s(x_k) = -F'(x_k)^{-1} F(x_k)$ are expensive and time-consuming [9]. This has led to the study of different variants of the Newton method for systems of nonlinear equations. One of the simplest and lowest-cost variants, which almost entirely avoids derivative evaluation at every iteration, is the chord method. This scheme computes the Jacobian matrix $F'(x_0)$ only once throughout the iteration process for a finite dimensional problem, as presented in Eq. (6):
$$x_{k+1} = x_k - F'(x_0)^{-1} F(x_k), \qquad k = 0, 1, 2, \ldots. \tag{6}$$
The rate of convergence is linear and improves as the initial point gets better. Suppose $x_0$ is chosen sufficiently near the solution point $x^*$ and $F'(x^*)$ is nonsingular; then, for some $K_c > 0$, we have
$$\|x_{k+1} - x^*\| \le K_c\, \|x_0 - x^*\|\, \|x_k - x^*\|. \tag{7}$$
The convergence theorems and the proof of Eq. (7) can be found in [9,10]. Motivated by the excellent convergence of the Newton method and the low Jacobian-evaluation cost of the chord method, a method due originally to Shamanskii [11,12], lying between the Newton method and the chord method, was proposed and has been analyzed in Kelley [9,13-15]. Other variants of the Newton method with different Jacobian approximation schemes include [9,14,16-18]. However, most of these methods require the computation and storage of the full or approximate Jacobian, which becomes very difficult and time-consuming as the dimension of the system increases [10,19].
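The following Python sketch contrasts the Newton iteration (5) with the chord iteration (6) on a small toy system; the test function, starting point, and tolerances are illustrative choices, not taken from the chapter.

```python
# Sketch comparing Newton's method (fresh Jacobian each step) with the chord
# method (one Jacobian factored at x0) on a 2x2 toy system.
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):  # analytic Jacobian of F
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

def newton(x, iters=20, tol=1e-10):
    for k in range(iters):
        x = x - np.linalg.solve(J(x), F(x))       # fresh Jacobian each step
        if np.linalg.norm(F(x)) < tol:
            return x, k + 1
    return x, iters

def chord(x, iters=200, tol=1e-10):
    J0 = J(x)                                     # Jacobian evaluated once
    for k in range(iters):
        x = x - np.linalg.solve(J0, F(x))
        if np.linalg.norm(F(x)) < tol:
            return x, k + 1
    return x, iters

x0 = np.array([-1.5, 1.0])
for name, method in [("Newton", newton), ("chord", chord)]:
    sol, its = method(x0.copy())
    print(f"{name}: {its} iterations, residual {np.linalg.norm(F(sol)):.2e}")
```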
It would be worthwhile to construct a derivative-free approach and compare it with existing techniques [20-22]. The aim of this work is to derive a diagonal matrix approximation to the Jacobian of the Shamanskii method by means of variational techniques. The expectation is to reduce the computational cost, storage, and CPU time of evaluating any problem. The proposed method works efficiently by combining the good convergence rate of the Shamanskii method with the derivative-free approach employed, and the results are very encouraging. The next section presents the Shamanskii method for nonlinear systems of equations.
Shamanskii method
It is known that the Newton method defined in Eq. (5) converges quadratically to $x^*$ when the initial guess is sufficiently close to the root [7,10,19]. The major concern about this method is the evaluation and storage of the Jacobian matrix at every iteration [23]. A scheme that almost completely overcomes this is the chord method. This method factors the Jacobian matrix only once in the case of a finite dimensional problem, thereby reducing the evaluation cost of each iteration as in Eq. (6), at the price of degrading the convergence rate to linear [10].
Motivated by this, a method due originally to Shamanskii [11] was developed and analyzed by [7,13,14,16,24]. Starting with an initial approximation $x_0$, this method uses multiple pseudo-Newton steps per Jacobian evaluation, as described below: for a fixed integer $t \ge 1$, set $y_k^{(0)} = x_k$,
$$y_k^{(j+1)} = y_k^{(j)} - F'(x_k)^{-1} F\big(y_k^{(j)}\big), \qquad j = 0, 1, \ldots, t - 1,$$
and, after a little simplification, take $x_{k+1} = y_k^{(t)}$. This method converges superlinearly with q-order at least $t + 1$ when the initial approximation $x_0$ is chosen sufficiently near the solution point $x^*$ and $F'(x^*)$ is nonsingular. This implies that there exists $K_s > 0$ such that
$$\|x_{k+1} - x^*\| \le K_s\, \|x_k - x^*\|^{\,t+1}. \tag{8}$$
Combining Eq. (7) and the quadratic convergence of the Newton method produces the convergence rate of the Shamanskii method in Eq. (8). Thus, the balance is between the reduced cost of Fréchet-derivative and Jacobian computation for the Shamanskii method and the rapid convergence of the Newton method. This low-cost derivative evaluation, as well as the rapid convergence rate of several methods including the Shamanskii method, has been studied and analyzed in [13,15]. From this analysis, the researchers conclude that the Shamanskii method shows superior performance compared with the Newton method in terms of efficiency whenever work is measured in function evaluations [9]. Also, if the value of t is chosen suitably, then, as the dimension increases, the performance of the Shamanskii method improves, reducing the complexity of factoring the approximate Jacobian to that of two pseudo-Newton iterations [14]. Please refer to [15] for the proof of the convergence theorem described below.
Theorem 2 [15]: Let $F : D \subset \mathbb{R}^n \to \mathbb{R}^n$ satisfy hypotheses (H1), (H2), and (H3). Then the solution point $x^*$ is a point of attraction of the Shamanskii iteration, i.e., Eq. (10), and this method possesses at least cubic order of convergence.
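A minimal Python sketch of the Shamanskii cycle described above is given below; setting t = 1 recovers the Newton method, while large t approaches the chord method. The toy system is the same illustrative one used in the previous sketch.

```python
# Sketch of the Shamanskii iteration: one Jacobian evaluation per outer cycle,
# reused for t inner pseudo-Newton steps.
import numpy as np

def F(x):   # same toy system as in the Newton/chord sketch
    return np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [np.exp(x[0]), 1.0]])

def shamanskii(x, t=2, outer=50, tol=1e-10):
    n_steps = 0
    for _ in range(outer):
        Jk = J(x)                      # one Jacobian evaluation per cycle
        for _ in range(t):             # t inner steps reuse the same Jk
            x = x - np.linalg.solve(Jk, F(x))
            n_steps += 1
            if np.linalg.norm(F(x)) < tol:
                return x, n_steps
    return x, n_steps

x, n = shamanskii(np.array([-1.5, 1.0]), t=2)
print(f"t=2: {n} inner steps, residual {np.linalg.norm(F(x)):.2e}")
```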
Diagonal updating scheme for solving nonlinear systems
Evaluation or inversion of the Jacobian matrix at every iteration, or even after a few iterations, remains costly even though the computational cost has generally been reduced, as in the Shamanskii method [14,25-28]. As a matter of fact, it can easily be shown that by adding a diagonal updating scheme to a method, we obtain a new low-memory iterative approach in which the Jacobian $F'(x_k)$ is approximated by a nonsingular diagonal matrix that can be updated at every iteration [29-31]. Indeed, using the Shamanskii procedure, the proposed method avoids the main complexity of Newton-type methods by reusing the evaluated Jacobian during the iteration process. This is the basic idea of the Shamanskii-like method, which is described as follows.
Given an initial approximation $x_0$, we evaluate the function F of Eq. (2) and its Jacobian $F'(x_k)$, and we introduce a diagonal approximation to the Jacobian, say $D_k$, as follows. Suppose
$$s_k = x_{k+1} - x_k, \qquad y_k = F(x_{k+1}) - F(x_k); \tag{12}$$
then, by the mean value theorem (MVT), we require
$$D_{k+1} s_k \approx y_k. \tag{13}$$
Substituting the definitions (12) into (13), and since $D_{k+1}$ is the update of the diagonal matrix $D_k$, let us assume that $D_{k+1}$ satisfies the weak secant equation
$$s_k^T D_{k+1} s_k = s_k^T y_k, \tag{15}$$
which is used to minimize the deviation between $D_{k+1}$ and $D_k$ under a suitable norm. The update formula for $D_k$ follows from the theorem below.

Theorem 3: Suppose $D_{k+1}$ is the update of the diagonal matrix $D_k$, let $\Delta_k = D_{k+1} - D_k$, and assume $s_k \ne 0$. Consider the problem
$$\min \tfrac{1}{2}\, \|\Delta_k\|_F^2 \quad \text{subject to Eq. (15)}, \tag{16}$$
where $\|\cdot\|_F$ denotes the Frobenius norm. Writing $\Omega_k = \mathrm{diag}\big((s_k^{(1)})^2, \ldots, (s_k^{(n)})^2\big)$, the optimal solution of (16) is
$$\Delta_k = \mu_k\, \Omega_k,$$
where $\mu_k$ is the corresponding Lagrange multiplier. Substituting this into the constraint (15) and simplifying, we have
$$\mu_k = \frac{s_k^T y_k - s_k^T D_k s_k}{\mathrm{tr}\big(\Omega_k^2\big)},$$
and, for the diagonal matrix $D_{k+1}$, the i-th diagonal element is given by
$$(D_{k+1})_{ii} = (D_k)_{ii} + \frac{s_k^T y_k - s_k^T D_k s_k}{\sum_{j=1}^{n} (s_k^{(j)})^4}\,(s_k^{(i)})^2.$$
This completes the proof. ∎

Now, from the above description of the theorem, we deduce that the best possible diagonal update $D_{k+1}$ is
$$D_{k+1} = D_k + \frac{s_k^T y_k - s_k^T D_k s_k}{\mathrm{tr}\big(\Omega_k^2\big)}\,\Omega_k. \tag{18}$$
However, for possibly small $\|s_k\|$ and $\mathrm{tr}(\Omega_k)$, we need to define a condition to be applied in such cases. To this end, we require that $\|s_k\| \ge \epsilon_1$ for some chosen small $\epsilon_1 > 0$ before the update is applied. Thus, the proposed accelerated method combines the Shamanskii inner iteration with the diagonal update (18). The performance of this proposed method is tested on well-known benchmark problems employed by researchers on existing methods, using a computer code designed for its algorithm. The problems can be artificial or real-life problems. The artificial problems check the performance of an algorithm in situations such as a point of singularity, a function with many solutions, and a null-space effect, as presented in Figures 1-3 [7,32].
The real-life problems emerge from fields such as chemistry, engineering, and management; they often involve large data sets or complex algebraic expressions, which makes them difficult to solve. A sketch of the diagonal-updating step is given below.
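The following Python sketch implements one plausible reading of the diagonal-updating iteration derived above, using the least-change update (18) with an illustrative safeguard; the test system, initial point, and thresholds are assumptions for demonstration, not the chapter's benchmark problems.

```python
# Sketch of the diagonal-update step: D_{k+1} = D_k + mu_k * Omega_k with
# Omega_k = diag(s_k**2) and mu_k = (s.y - s.D.s) / tr(Omega_k**2), as in (18),
# guarded against very small steps.
import numpy as np

def F(x):   # simple decoupled toy system with root (2, 3)
    return x**2 - np.array([4.0, 9.0])

def diagonal_update_solve(x, iters=500, tol=1e-8, eps=1e-12):
    D = np.ones_like(x)                  # diagonal Jacobian approximation
    Fx = F(x)
    for k in range(iters):
        x_new = x - Fx / D               # diagonal "solve" is elementwise
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        omega = s**2
        if np.linalg.norm(s) >= eps and omega @ omega > eps:
            mu = (s @ y - s @ (D * s)) / (omega @ omega)
            D = D + mu * omega           # least-change diagonal update (18)
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            return x, k + 1
    return x, iters

x, its = diagonal_update_solve(np.array([1.5, 2.5]))
print(f"diagonal update: {its} iterations, "
      f"residual {np.linalg.norm(F(x)):.2e}")
```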
Numerical results
This section demonstrates the proposed method and illustrates its advantages on some benchmark problems with dimensions ranging from 25 to 1,000 variables. These include problems with restrictions such as a singular Jacobian or problems with only one point of singularity. To evaluate the performance of the proposed diagonal-updating Shamanskii method (DUSM), we employ the benchmarking tools of Dolan and Moré [33] and compare the performance with two classical Newton-type methods based on the number of iterations and the CPU time in seconds. The methods include:

1. The Newton method (NM)
2. The Shamanskii method (SM)
These tools are used to represent the efficiency, robustness, and numerical comparison of different algorithms. Suppose there exist $n_s$ solvers and $n_p$ problems; for each problem p and solver s, define $t_{p,s}$ to be the computing time (or number of iterations) required to solve problem p by solver s. Requiring a baseline for comparisons, the performance on problem p by solver s is compared with the best performance by any solver on this problem using the performance ratio
$$r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}.$$
We suppose that a parameter $r_M \ge r_{p,s}$ for all p, s is chosen, and $r_{p,s} = r_M$ if and only if solver s does not solve problem p. The performance of solver s on any given problem might be of interest, but because we prefer an overall assessment of the performance of the solver, one defines
$$\rho_s(\tau) = \frac{1}{n_p}\, \mathrm{size}\{p \in P : r_{p,s} \le \tau\}.$$
Thus, $\rho_s(\tau)$ is the probability for solver $s \in S$ that its performance ratio $r_{p,s}$ is within a factor $\tau \in \mathbb{R}$ of the best possible ratio. The function $\rho_s$ is then the cumulative distribution function for the performance ratio. The performance profile $\rho_s : \mathbb{R} \to [0, 1]$ for a solver is nondecreasing, piecewise constant, and continuous from the right. The value of $\rho_s(1)$ is the probability that the solver will win over the rest of the solvers. In general, a solver with a high value of $\rho_s(\tau)$, or one at the top right of the figure, is preferable and represents the best solver.
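The following Python sketch computes the Dolan-Moré performance profile ρ_s(τ) exactly as described above, on a toy timing table; the data and the choice of r_M are illustrative.

```python
# Sketch of the Dolan-More performance profile: r_{p,s} = t_{p,s} / min_s t_{p,s},
# with failures assigned a large ratio r_M.
import numpy as np

# rows = problems, columns = solvers; np.nan marks a failure (toy data)
T = np.array([[1.0, 2.0, 1.5],
              [0.5, np.nan, 0.7],
              [3.0, 3.0, 2.0],
              [2.0, 8.0, np.nan]])

best = np.nanmin(T, axis=1, keepdims=True)
R = T / best
r_M = 2.0 * np.nanmax(R)          # any value larger than every finite ratio
R = np.where(np.isnan(R), r_M, R)

def rho(tau):
    """Fraction of problems each solver brings within factor tau of the best."""
    return (R <= tau).mean(axis=0)

for tau in (1.0, 2.0, 4.0):
    print(f"rho({tau}) = {rho(tau)}")
```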
All problems considered in this research are solved using MATLAB (R2015a) subroutine programming [37]. The runs were performed on an Intel® Core™ i5-2410M CPU @ 2.30 GHz with 4 GB of RAM under the Windows 7 Professional operating system. A tolerance-based termination condition on the residual norm is set, and the program has been designed to terminate whenever:

• The number of iterations exceeds 500, and no point $x_k$ satisfies the termination condition.
• The CPU time in seconds reaches 500.
• Insufficient memory to initiate the run.
At the point of failure due to any of the above conditions, as in the tabulated results, the number of iterations and the CPU time are taken as zero, and that entry is denoted by "*". The following are the details of the standard test problems, the initial points used, and the exact solutions for the systems of nonlinear equations.
Problem 1 [31]: System of n nonlinear equations with initial point $x_0 = (1.1, 1.1, \ldots, 1.1)^T$. Table 1 shows the number of iterations (NI) and the CPU time for the Newton method (NM), the Shamanskii method (SM), and the proposed diagonal-updating method (DUSM), respectively. The performance of these methods was analyzed via storage locations and execution time. It can be observed that the proposed DUSM was able to solve the test problems perfectly, while NM and SM fail at some points because the matrix is singular to working precision. This shows that the diagonal scheme employed provides an option in the case of singularity, thereby reducing the computational cost of the classical Newton-type methods.
Conclusion
This chapter proposes a diagonal updating formula for systems of nonlinear equations, which contributes to a reduction in Jacobian evaluation cost. Through computational experiments, we reach the conclusion that the proposed scheme is reliable and efficient and reduces the Jacobian computational cost during the iteration process. Moreover, the proposed scheme is superior to the classical and existing numerical methods for solving systems of equations.
Author details
Ibrahim Mohammed Sulaiman*, Mustafa Mamat and Umar Audu Omesa, Universiti Sultan Zainal Abidin, Kuala Terengganu, Malaysia. *Address all correspondence to<EMAIL_ADDRESS>| 3,789.8 | 2020-05-13T00:00:00.000 | [
"Mathematics"
] |
Suppression of Methanol and Formate Crossover through Sulfanilic‐Functionalized Holey Graphene as Proton Exchange Membranes
Abstract Proton exchange membranes with high proton conductivity and low crossover of fuel molecules are required to realize advanced fuel‐cell technology. The selective transportation of protons, which occurs by blocking the transportation of fuel molecules across a proton exchange membrane, is crucial to suppress crossover while maintaining a high proton conductivity. In this study, a simple yet powerful method is proposed for optimizing the crossover‐conductivity relationship by pasting sulfanilic‐functionalized holey graphenes onto a Nafion membrane. The results show that the sulfanilic‐functionalized holey graphenes supported by the membrane suppress the crossover by 89% in methanol and 80% in formate compared with that in the self‐assembled Nafion membrane; an ≈60% reduction is observed in the proton conductivity. This method exhibits the potential for application in advanced fuel cells that use methanol and formic acid as chemical fuels to achieve high energy efficiency.
Introduction
Recycling carbon dioxide (CO2) and utilizing fuel cells are crucial to achieving carbon neutrality for a sustainable society [1]. A promising technology to reduce CO2 emissions is the electrochemical CO2 reduction reaction, which uses renewable energy to produce energy carriers, including methanol and formic acid. These energy carriers can be directly used as chemical fuels for fuel cells, such as direct methanol fuel cells (DMFCs) and direct formic acid fuel cells (DFAFCs), without requiring dehydrogenation processes [2-8]. A high proton conductivity and a low crossover of fuel molecules are required in these fuel cells. Nafion is the standard membrane used in fuel cells because it exhibits excellent proton conductivity (proton conductivity: 10-100 mS cm−1; areal proton conductivity: 0.2-3.3 S cm−2) [9,10]. However, the Nafion membrane displays a large crossover from the anode to the cathode chamber (methanol: 1.0-5.0 × 10−6 cm2 s−1; formic acid: 0.68 × 10−6 cm2 s−1) that prevents the practical use of DMFCs and DFAFCs [11]. The crossover of fuel molecules in a full-cell system is caused by electro-osmosis and pressure/concentration gradients between the cathode and anode chambers [12-16]. A simple way to suppress the crossover is to increase the membrane thickness; however, this approach has two drawbacks. First, increasing the membrane thickness inevitably increases the size of the fuel-cell units and the associated material costs. Second, the internal resistance to proton transport through the membrane increases with increasing membrane thickness, which decreases the energy efficiency [7,16,17]. The former may be mitigated by improving the selectivity of the membranes to molecules and ions in the cell [12,18], by installing metal-organic frameworks (MOFs) [19], graphene [2,20-24], and other 2D nanomaterials [10,12] on the surface of the Nafion membrane. In particular, graphene sheets have been reported to block large molecules from penetrating the graphene lattice while allowing protons to penetrate [10,21,25-28]. However, such membranes have failed to meet the minimum requirement of proton conductivity in fuel cells (i.e., 10% of the proton conductivity of Nafion membranes, ≈0.1 S cm−1 or 1000 mS cm−2) [29]. To date, attempts to maintain a high proton conductivity have been unsuccessful, and advanced fuel cells have not been used practically. A balance is therefore required between the crossover of fuel molecules and the proton conductivity.
In this study, we achieved an excellent balance between the crossover of fuel molecules and the proton conductivity by fabricating holey graphene with physical and chemical modifications. The incorporation of 5-20 nm holes in graphene markedly increased the proton conductivity, whereas the functional groups introduced on the graphene suppressed the crossover of fuel molecules. The membrane fabricated with the sulfanilic functional group suppressed the crossover by >80% with only a slightly reduced proton conductivity compared with the Nafion membranes. It is acknowledged that sulfanilic groups suppress the crossover while increasing the proton conductivity [2,22]. Moreover, density functional theory (DFT) calculations indicated that graphene comprising sulfanilic functional groups increases the energy barrier for the transport of methanol and formic acid and provides a hopping conduction pathway for protons through the membranes. This graphene-based technology is a simple yet powerful method for the rational design of fuel-cell membranes.
Membrane Characterization
A high-quality single graphene layer (1GL) and a single holey graphene layer (1hGL) were synthesized on a Cu sheet by the chemical vapor deposition (CVD) method, with or without SiO2 nanoparticles (≈20 nm in diameter) deposited on the Cu sheet before the CVD process (Figures S1-S3, Supporting Information) [30]. For the single functionalized graphene layer (1fGL), we chemically functionalized 1hGL with sulfanilic acid to introduce benzenesulfonic acid groups (-C6H4-SO3H) at the edges of the holes (see the Experimental Section for details).
The transmission electron microscopy (TEM) images indicate that the holes in 1hGL and 1fGL are 5-20 nm in diameter (Figure 1a; Figures S2 and S4, Supporting Information). The high-resolution TEM (HRTEM) images of the 1fGL show that a fringe of ≈0.5 nm width is present around the hole; by contrast, the other graphene regions are highly crystalline (Figure 1b). The corresponding electron diffraction patterns show sharp diffraction spots, indicating that 1hGL and 1fGL are highly crystalline (Figure 1b; Figure S2, Supporting Information). The dark-field scanning TEM (DF-STEM) images indicate that the ratio of hole area to total area is ≈12.0 ± 1.5% on average (Figure S4, Supporting Information). The elemental distributions of the S and O atoms in 1fGL were confirmed using in situ energy-dispersive X-ray spectroscopy (EDS). The concentrations of S and O in the benzenesulfonic acid groups are qualitatively about twice as high in the near-hole regions as in the plane regions (Figure 1c-e).
The presence of benzenesulfonic acid groups in 1fGL was further confirmed using Fourier transform infrared spectroscopy (FT-IR) and X-ray photoelectron spectroscopy (XPS). The FT-IR spectra of the fGL samples show fingerprints of the sulfanilic functional group at 1058 and 1093 cm−1 (-SO3Na) [22,31], which are not observed in the spectra of the hGL samples (Figure 2a). Note that -SO3H groups are observed as -SO3Na groups in the dried state. The XPS S 2p spectra of the 1fGL confirm the chemical binding state of the sulfanilic functional group on graphene (169.2 and 168.0 eV) [32]. The atomic concentration of S atoms is ≈2.49 at.% in the 1fGL samples (Figure 2b; Figure S5, Supporting Information).
The spatial Raman mapping images of a graphene layer (1GL, 1hGL, or 1fGL) sandwiched between two spin-coated Nafion membranes on a window-attached Si3N4 chip (window size: 20 × 20 μm) were examined to study the relationship between the functionalization and the level of defects. The Nafion/graphene/Nafion sandwiched membranes are ≈2 μm thick with a maximum roughness of 80 nm and a roughness ratio of 4% (Figures S6-S8, Supporting Information).
Design Principles of Fabricated Membranes
We investigated the penetration of protons and fuel molecules (methanol and formic acid) through various graphene membranes (Figures S11-S16, Supporting Information). For the electrochemical measurements, the N/xGL/N, N/xhGL/N, and N/xfGL/N membranes (x refers to the number of graphene layers) were placed on the open-windowed Si3N4 chip to serve as both the proton exchange and the separating membrane in an H-type cell [25]. The Si3N4 chip was reinforced with a polyethylene terephthalate (PET) sheet to fix it in the H-type cell and physically isolate the working- and counter-electrode chambers (Figure S11, Supporting Information). To investigate proton penetration, we immersed the graphene-suspended membrane device in a sulfuric acid solution (0.05 M) and performed electrochemical tests using a two-electrode method with Pt mesh electrodes. Figure 3a shows an example of the current-voltage (I-V) characteristics measured in the devices using the N/xGL/N, N/xhGL/N, and N/xfGL/N membranes. The measured proton current I varies linearly with the bias voltage V. The conductance S (= I/V) divided by the effective membrane area A is discussed below as the areal proton conductivity (= S/A) [10,25,33]. The crossover rate of the fuel molecules was calculated from the quantity of fuel molecules that crossed over from the working-electrode chamber containing methanol or formate (formic acid exists as formate in acid) to the counter-electrode chamber during chronoamperometry (CA) (Figure 3b,c; Figures S14-S16, Supporting Information).
To improve the areal proton conductivity, we created holes with diameters of 5-20 nm in 1GL (Figure S1, Supporting Information). As expected, the areal proton conductivity of N/1hGL/N improves markedly, from 0.135 to 1060 mS cm−2, which is comparable with that of the N/N. Surprisingly, the crossover rates of methanol and formate are 0.715 and 0.163 mol m−2 s−1, respectively, comparable with those of N/1GL/N and markedly improved over those of the N/N (methanol: 1.060 mol m−2 s−1; formate: 0.261 mol m−2 s−1). These results indicate that the holes in the graphene sheet enable the selective transport of molecules and protons.
To further optimize the membrane performance for fuel cells, we functionalized the holey graphene with sulfanilic groups. Figure 3c summarizes the areal proton conductivities and the crossover rates of methanol and formate through the various graphene membranes. The areal proton conductivity of N/1fGL/N is 1500 mS cm−2, which is higher than that of N/1hGL/N (1060 mS cm−2); the crossover rates for methanol and formate are 0.573 and 0.163 mol m−2 s−1, respectively, comparable with those obtained for N/1hGL/N (methanol: 0.715 mol m−2 s−1; formate: 0.163 mol m−2 s−1). Therefore, the sulfanilic functional groups on 1hGL (i.e., 1fGL) maintain 77.7% of the proton conductivity and suppress the crossover rates for methanol and formate by 45.9% and 37.5%, respectively, compared with those of the N/N (Tables S1 and S2, Supporting Information).
Having recognized that the N/1fGL/N showed a marked improvement in membrane performance compared with N/1GL/N and N/1hGL/N, we optimized the number of graphene layers under the same experimental conditions. Among the tri-layer graphenes (N/3GL/N: 0.016 mS cm−2; N/3hGL/N: 509 mS cm−2; N/3fGL/N: 818 mS cm−2), N/3fGL/N displays the highest areal proton conductivity, approximately half that of the N/N (Table S1, Supporting Information). N/3fGL/N exhibits lower methanol and formate crossover rates (0.120 and 0.052 mol m−2 s−1, respectively) than N/3hGL/N; the crossover rate of N/3fGL/N is comparable with that of N/3GL/N (Figure 3c; Tables S1 and S2, Supporting Information). Notably, N/3fGL/N maintains 42.3% of the proton conductivity of the N/N while suppressing the methanol and formate crossover rates by 88.7% and 80.0%, respectively, compared with the N/N; N/3fGL/N thus exhibits an extremely low crossover of fuel molecules and almost meets the target proton conductivity (≈1000 mS cm−2) [10] required for fuel-cell operation. Notably, N/3fGL/N exhibits an areal proton conductivity of 1196 mS cm−2 at 50 °C (i.e., a practical operating temperature), which completely meets the requirement of hydrogen-type fuel cells (Figure 3d,e) [10,34-40]. Subsequently, we investigated the membrane stability by scanning electron microscopy (SEM) and Raman spectroscopy. The cross-sectional SEM images after the crossover tests confirmed that the membranes maintained their dimensions (Figure S17, Supporting Information) and their flat surface structures (Figure S18, Supporting Information). The Raman mapping after the tests indicated that the graphene character was preserved over the entire membrane area (Figure S19, Supporting Information). These results suggest that graphene layers sandwiched between Nafion membranes are highly stable during the tests.
To establish the impact of the Nafion membrane thickness on the suppression of crossover by 3fGL, we investigated the areal proton conductivity and the methanol/formate crossover rates using commercially available Nafion 212 (thickness: 50 μm) and Nafion 117 (thickness: 180 μm) and compared them with those of our Nafion (N/N, thickness: 2 μm). Notable reductions of the crossover rate (Nafion 212: 0.0425 mol m−2 s−1 for methanol and 0.0146 mol m−2 s−1 for formate; Nafion 117: 0.0184 mol m−2 s−1 for methanol and 0.0015 mol m−2 s−1 for formate) and of the proton conductivity (117.2 mS cm−2 for Nafion 212 and 88.7 mS cm−2 for Nafion 117 in our experimental setting) are observed as the membrane thickness increases from 2 to 180 μm (Figure S20 and Tables S1 and S2, Supporting Information). The crossover suppression ratios of Nafion 117 relative to N/N are 98.3% in methanol and 99.4% in formate, and the areal proton conductivity reduction ratio of Nafion 117 relative to N/N is 95.4%; these values are roughly 35% higher than the corresponding ratios of N/3fGL/N relative to N/N. Thus, we confirmed that increasing the membrane thickness markedly reduces the areal proton conductivity while preventing crossover. Next, we examined the performance of N/3fGL pasted onto Nafion 212 and Nafion 117 membranes (namely, N/3fGL/Nafion 212 and N/3fGL/Nafion 117). The crossover suppression ratios relative to the bare membranes are 58.5% in methanol and 58.7% in formate for N/3fGL/Nafion 212, and 71.5% in methanol and 53.2% in formate for N/3fGL/Nafion 117, with 39.2% and 31.2% reductions of the areal proton conductivity, respectively (Tables S1 and S2, Supporting Information). These results indicate that our crossover suppression technology is applicable to commercially available membranes.
Mechanism of the High Selectivity of 3fGL
We experimentally examined the energy barriers for proton penetration as well as methanol and formate penetration through N/3fGL/N and N/3hGL/N using Arrhenius-type plots in the temperature range of 5 to 50 °C (Figure 3d,e; Figure S21 and Table S5, Supporting Information) [10]. The apparent energy barriers for proton penetration at a bias voltage of −1.0 V, estimated from the Arrhenius-type plots, are 0.37 and 0.13 eV for N/3hGL/N and N/3fGL/N, respectively. These results imply that the sulfanilic functional groups reduce the energy barrier for proton penetration. The Arrhenius-type plots of the crossover rates of methanol and formate, obtained from 5 h CA measurements, are displayed in Figure 3d. The apparent energy barriers for methanol crossover at a bias voltage of −1.6 V are 0.30 and 0.36 eV for N/3hGL/N and N/3fGL/N, respectively (Figure 3e). The apparent energy barriers for formate crossover at a bias voltage of −1.6 V are 0.19 and 0.34 eV for N/3hGL/N and N/3fGL/N, respectively (Figure 3e). The apparent energy barrier for the crossover of both methanol and formate thus increases upon the introduction of sulfanilic functional groups in the hole regions of the graphene.
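As an illustration of how such apparent barriers are extracted, the following Python sketch fits an Arrhenius plot, ln(rate) versus 1/T, whose slope is −Ea/kB; the rates are synthetic, generated with an assumed barrier of 0.34 eV, and are not the measured data.

```python
# Sketch of the Arrhenius-type analysis: rate = A * exp(-Ea / (kB * T)), so a
# linear fit of ln(rate) against 1/T has slope -Ea/kB.
import numpy as np

KB_EV = 8.617e-5                                  # Boltzmann constant [eV/K]
T = np.array([278.0, 288.0, 298.0, 308.0, 323.0])  # roughly 5-50 degC, in K
rate = 1e3 * np.exp(-0.34 / (KB_EV * T))           # synthetic crossover rates

slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
print(f"apparent barrier Ea = {-slope * KB_EV:.2f} eV")   # -> 0.34 eV
```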
We performed DFT calculations to examine the effect of the sulfanilic functional groups (attached to the graphene holes) on the molecular sieving effect. First, we investigated the hole-size dependence of the proton and fuel-molecule penetration energy barriers and confirmed that the energy barriers in a graphene model with a hole of ≥1 nm diameter are negligible (Figure S22; Table S6, Supporting Information). To this end, we computed the energy barrier for the process wherein a proton, methanol, or formic acid molecule passes through a nanoconfined space created by the graphene layers (Figure 5a,b; Figures S22-S26 and Tables S6-S8, Supporting Information). The energy barriers for the holey graphene comprising sulfanilic functional groups are 0.21, 0.76, and 0.80 eV for proton, methanol, and formic acid penetration, respectively; the energy barriers for the holey graphene without sulfanilic groups are −0.63, 0.36, and 0.82 eV, respectively (Figure 5a-d). Importantly, the positive energy barriers of the methanol and formate models indicate that external energy is required for these molecules to pass through the graphene layers. By contrast, the negative energy barrier of the protons in the graphene sheets without sulfanilic groups indicates that protons are adsorbed at the graphene edges. In the presence of sulfanilic functional groups, the protons are easily transferred between the sulfanilic functional groups via the Grotthuss mechanism (Figure 5d) [41]. These results confirm that the crossover is suppressed and the proton conductivity is maintained in the sulfanilic-functionalized graphene membranes.
Discussion
We investigated the effect of sulfanilic-functionalized holey graphene on the areal proton conductivity and the methanol and formic acid crossover rates by combining electrochemical measurements and DFT-calculated energy barriers. The fGL membranes notably suppressed the crossover while maintaining a high areal proton conductivity. The improvements can be attributed to the following factors: i) the introduction of holes into the graphene lattice (i.e., hGL) provides multiple proton penetration pathways between graphene layers; ii) the sulfanilic functional groups on holey graphene (i.e., fGL) enable the formation of new proton conduction pathways by the transfer of H+ ions via the Grotthuss mechanism and accelerate proton hopping conduction on the sulfanilic functional groups between graphene planes (indeed, hGL without sulfanilic functional groups displayed high crossover rates, implying that protons in the form of H3O+ are accompanied by water/fuel molecules during proton penetration, i.e., a water/fuel electro-osmotic drag effect); iii) the apparent energy barrier for the crossover of both methanol and formate increases upon the introduction of the bulky, net-like sulfanilic functional groups along the molecular migration pathway; and iv) the negatively charged sulfanilic functional groups introduce steric hindrance and block anionic molecules, thereby inhibiting the migration of methanol and formate while allowing the migration of small, positively charged protons. The synergistic effects of introducing holes and sulfanilic functional groups into graphene play a crucial role in balancing selective proton transfer and suppressing the crossover of fuel molecules.
Conclusion
In this study, we systematically revealed the penetration mechanisms of protons, methanol, and formate molecules through sulfanilic-functionalized holey graphene. The results indicate that proton conductivity and fuel-molecule crossover can be tuned by introducing nanometer-sized holes into graphene, functionalizing the holey graphene with sulfanilic groups, and adjusting the number of graphene layers. Crossover suppression was achieved while high areal proton conductivity was maintained, and previously reported membranes could in principle be improved by simply laminating the graphene membrane onto their surface, thereby reducing both the Nafion material cost and the cell volume. Our findings should contribute to the development of electrosynthetic cells for electrochemical CO2 reduction and of advanced fuel cells such as DMFCs and DFAFCs; by selecting appropriate functional groups, the approach could be extended to other chemical fuels such as ammonia, urea, and hydrogen peroxide, facilitating progress toward a carbon-neutral society.
Experimental Section
Synthesis of Graphene, Holey Graphene, and Functionalized Holey Graphene: 1GL was grown on a Cu foil (99.8%, 25 μm thickness, Alfa Aesar, UK) by a conventional CVD method. The Cu foil was inserted into the center of a quartz tube (30 × 27 × 1000 mm) in a furnace. The tube was heated at 1000 °C for 1 h under an atmosphere of H2 (100 sccm, 99.99%) and Ar (200 sccm, 99.995%). Thereafter, monolayer graphene was grown with an additional flow of CH4 (20 sccm, 99.995%) for 30 min. After the graphene growth, the furnace was immediately opened, and the quartz tube was cooled to room temperature (25 °C) with a fan. 1hGL was synthesized by a similar method to that used for 1GL, except that SiO2 nanoparticles (18-27 nm in diameter, ST-50, 48 wt.%, Nissan Chemical Industries, Japan) were distributed on the Cu foil before the CVD process. The purchased SiO2 solution was diluted to 1 × 10−4 wt.% with ultrapure water and then further diluted with 2-propanol (water:2-propanol = 1:1 (v/v)) to control the density of holes. Thereafter, 10 μL of the dispersion was added dropwise onto a 1 cm2 copper foil and dried. Subsequently, 1hGL was obtained under the same CVD conditions as those used for 1GL. For 1fGL, the as-synthesized 1hGL on the Cu foil was immersed in a mixed solution of 50 mL of sulfanilic acid solution (0.1 m) and 10 mL of NaNO2 solution (0.01 m), and heated at 70 °C for 12 h. After heating, the blackened solution was exchanged with pure water several times for washing, and the 1fGL on the Cu foil was dried. The as-synthesized graphene on the Cu foil was chemically etched away in 50 mL of Fe(NO3)3 (0.25 m) solution, and the isolated graphene was manually stacked to prepare graphene membranes (see the Supporting Information and Figures S1-S6 for more detail).
Characterization of Graphene Samples: The morphology and microstructure of the as-synthesized graphene were characterized using SEM (JEOL JCM-7000) and TEM (JEOL JEM-2100F and JEM-ARM200F-B) at an accelerating voltage of 80 kV to prevent damage to the graphene, equipped with EDS (SDD type, detection surface area 30 mm2, solid angle 0.26 sr). Raman spectra were obtained using a Renishaw InVia Reflex with an incident wavelength of 532.5 nm. Raman measurements were performed on the Si3N4 chip-supported Nafion/graphene/Nafion membrane, and the obtained spectra were normalized by the maximum peak intensity. Fourier transform infrared (FT-IR) spectroscopy (ThermoFisher Nicolet iS50 FT-IR spectrometer), over the wavenumber range 3200 to 600 cm−1, was performed using an attenuated total reflection method. To obtain a clear FT-IR signal intensity, 30-layer stacked 1hGL and 1fGL samples were measured. Surface chemical states were studied using XPS (AXIS Ultra DLD, Shimadzu) with an Al Kα source and an X-ray monochromator. All samples were transferred onto a Si3N4 chip or a Cu TEM mesh grid for the measurements. Surface roughness measurements were performed on a thermal-oxide Si wafer substrate (<1 nm roughness, Canosis Co. Ltd, Japan) by AFM (Hitachi, AFM5000II).
Areal Proton Conductivity Measurements: An electrochemical workstation (Biologic VSP-300) equipped with an H-type cell was used for electrochemical measurements. A schematic of the H-type cell configuration for the proton penetration measurements is shown in Figure S11 (Supporting Information). A PET sheet separator, with a window-attached Si3N4 chip (window size: 20 × 20 μm) fixed in its center, was used to isolate the working- and counter-electrode chambers, which were filled with H2SO4 (0.05 m) solution. Proton conductivity measurements were performed in a two-electrode system, in which Pt meshes (35 × 25 mm) were used as the working and counter electrodes. Note that the proton conductivity was checked using aqueous H2SO4 (0.05 m) and aqueous H2SO4 (0.5 m) solutions, and there were no obvious differences (Figure S12, Supporting Information). The areal proton conductivity was investigated by measuring the current-voltage (I-V) characteristics with a bias voltage of +200 to −200 mV applied between the two Pt mesh electrodes. The temperature dependence of the areal proton conductivity was measured from −1.0 to +0.2 V after a 5 min waiting time to stabilize the measurement temperature.
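For orientation, the reduction of such an I-V sweep to an areal conductivity amounts to dividing the ohmic slope by the window area. A minimal sketch follows, assuming a simple linear fit and invented sweep values (the exact normalization used in the study may differ):

```python
import numpy as np

def areal_conductivity(voltages_v, currents_a, window_area_m2):
    """Areal conductivity (S m^-2): ohmic slope dI/dV over membrane area."""
    slope, _ = np.polyfit(voltages_v, currents_a, 1)   # conductance, S
    return slope / window_area_m2

v = np.linspace(-0.2, 0.2, 21)        # +/-200 mV sweep, as described above
i = 4.0e-8 * v                        # hypothetical ohmic response (A)
area = 20e-6 * 20e-6                  # 20 x 20 um Si3N4 chip window (m^2)
print(f"areal conductivity: {areal_conductivity(v, i, area):.3g} S m^-2")
```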
Crossover Measurements for Methanol and Formate: A 30% volumetric concentration of methanol or formic acid in aqueous H2SO4 (0.05 m) solution was used as the electrolyte in the working-electrode chamber for chronoamperometry (CA) measurements at −1.6 V using a two-electrode system. After the 5 h CA test, the electrolyte in the counter-electrode chamber was collected, and a quantitative analysis of methanol and formate was performed using nuclear magnetic resonance (NMR) and gas chromatography (GC). The crossover rate from the working-electrode to the counter-electrode chamber was calculated using the following equation:

$$\text{Crossover rate of fuel molecules} = \frac{cV}{At}$$

where c is the detected molar concentration of methanol or formate (mol/mL), V is the electrolyte volume (mL), A is the membrane area in the chip window (m2), and t is the measurement time (s).

Density Functional Theory Calculation: DFT calculations were performed via the CP2K program [42]. The Becke-Lee-Yang-Parr (BLYP) [43] exchange-correlation functional was used, with double-zeta valence plus polarization (DZVP) basis sets. The core electrons were described by the Goedecker-Teter-Hutter pseudopotential [44]. The real-space density cut-off was set to 320 Ry. The van der Waals correction was included via Grimme's D3 method [45]. The simulation cell lengths in the x-, y-, and z-directions were 25.56, 24.595, and 50 Å.
The graphene interlayer model was constructed using two overlapping graphene nanoribbons with a width of ≈15 Å. The edges of the graphene nanoribbons were terminated with hydrogen atoms. The sulfanilic-functionalized graphene interlayer was created by replacing eight terminal hydrogen atoms per unit cell with benzenesulfonic acid groups (─C6H4─SO3H). The nudged elastic band method [46] was used to calculate the energy barriers for the penetration of proton, methanol, and formic acid.
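The nudged elastic band (NEB) workflow used for these barriers is generic, so a compact sketch may help readers reproduce the approach. The example below uses the ASE package with a cheap EMT potential and a toy adatom-hopping barrier as a stand-in; the actual calculations here used CP2K with BLYP-D3 on the graphene interlayer models, which are far too expensive for a snippet.

```python
from ase.build import fcc100, add_adsorbate
from ase.calculators.emt import EMT          # stand-in for a real DFT code
from ase.constraints import FixAtoms
from ase.optimize import BFGS
try:
    from ase.mep import NEB                  # newer ASE versions
except ImportError:
    from ase.neb import NEB

# Toy system: Au adatom hopping between hollow sites on Al(100).
slab = fcc100('Al', size=(2, 2, 3))
add_adsorbate(slab, 'Au', 1.7, 'hollow')
slab.center(vacuum=4.0, axis=2)
slab.set_constraint(FixAtoms(mask=[a.symbol == 'Al' for a in slab]))

initial, final = slab.copy(), slab.copy()
final.positions[-1, 0] += final.cell[0, 0] / 2.0   # neighbouring hollow site
for endpoint in (initial, final):
    endpoint.calc = EMT()
    BFGS(endpoint, logfile=None).run(fmax=0.05)    # relax both endpoints

# Interpolate interior images along the path and relax the band.
images = [initial] + [initial.copy() for _ in range(3)] + [final]
for image in images[1:-1]:
    image.calc = EMT()
neb = NEB(images)
neb.interpolate()
BFGS(neb, logfile=None).run(fmax=0.05)

energies = [img.get_potential_energy() for img in images]
print(f"barrier: {max(energies) - energies[0]:.2f} eV")
```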
Figure 1. Morphological characterizations of sulfanilic-functionalized holey graphene. a) Transmission electron microscopy and b) high-resolution transmission electron microscopy images of hole regions on 1fGL; the inset of (b) shows electron diffraction patterns. c) Dark-field scanning transmission electron microscopy image of 3fGL, with energy dispersive X-ray spectroscopy (EDS) mapping of d) O and e) S.
Figure 2. Structural characterizations of sulfanilic-functionalized holey graphene. a) Fourier transform infrared spectra of 1hGL and 1fGL. b) XPS S 2p spectra for the 1fGL. c) Raman map images of N/1fGL/N on a window-attached Si3N4 chip; the dotted square indicates the window area. d) Raman spectra of N/1fGL/N collected at positions 1, 2, and 3 in (c). Scale bar: 5 μm.
Figure 3. Electrochemical properties of graphene membranes. a) Current-voltage characteristics of the proton current through various graphene membranes. b) Proton current collected by chronoamperometry (CA) measurements at a cell voltage of −1.6 V. c) Summary of areal proton conductivities and crossover rates of methanol and formate through various graphene membranes. Temperature-dependent Arrhenius-type plots of d) areal proton conductivity at −1.0 V, and e) methanol and formate crossover rates for N/3hGL/N and N/3fGL/N at −1.6 V.
Figure 5. Computational simulations of molecular crossover. Interlayer models and energy diagrams for proton, methanol, and formic acid penetration through a) bi-layer graphene and b) sulfanilic-functionalized bi-layer graphene. Schematic illustrations of c) fuel molecule and d) proton penetration through the (b) model.
"Materials Science",
"Chemistry",
"Engineering"
] |
HER2/neu overexpression in the development of muscle-invasive transitional cell carcinoma of the bladder
The mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with the progression of superficial or locally invasive disease (pTa/pT1) to detrusor muscle-invasive disease (pT2+). The most common prognostic markers in clinical use are tumour stage and grade, which are subject to considerable intra- and interobserver variation. Polysomy 17 and HER2/neu gene amplification and protein overexpression have been associated with more advanced disease. Standardised techniques of fluorescence in situ hybridisation and immunohistochemistry, which are currently applied to other cancers with a view to offering anti-HER2/neu therapies, were applied to tumour pairs comprising pre- and postinvasive disease from 25 patients undergoing treatment for bladder cancer. In the preinvasive tumours, increased HER2/neu copy number was observed in 76% of cases and increased chromosome 17 copy number in 88% of cases, and in the postinvasive group these values were 92 and 96%, respectively (not significantly different P=0.09 and 0.07, respectively). HER2 gene amplification rates were 8% in both groups. Protein overexpression rates were 76 and 52%, respectively, in the pre- and postinvasive groups (P=0.06). These results suggest that HER2/neu abnormalities occur prior to and persist with the onset of muscle-invasive disease. Gene amplification is uncommon and other molecular mechanisms must account for the high rates of protein overexpression. Anti-HER2/neu therapy might be of use in the treatment of TCC.
Transitional cell carcinoma (TCC) of the urinary bladder is common in the UK, with over 15 000 new cases diagnosed annually. The low mortality from superficial disease contributes to this being the second most prevalent cancer in the UK population. However, with the development of detrusor muscle invasion, mortality rates increase significantly, and over 50% of patients already have micrometastasis at diagnosis of detrusor muscle-invasive disease. Therefore, more aggressive clinical treatment needs to be applied, if there is a curative aim, in the form of radical surgery or radiotherapy plus adjuvant treatment (Skinner et al, 1991). Approximately 10-15% of superficial or locally invasive (pTa/pT1) tumours progress to muscle invasion, and this risk is dependent on tumour stage and grade. For example, well-differentiated (grade 1) tumours progress in only 2% of cases, whereas poorly differentiated (grade 3) tumours progress in up to 20% of cases. However, stage and grade are subject to 50% inter- and intraobserver variation (van der Meijden et al, 2000).
Therefore, more accurate prognostic factors are desirable, and genetic markers might fulfil this role (Reznikoff et al, 2000).
Polysomy 17 in TCC is a chromosome-specific event, independent of tumour polyploidy (Bartlett et al, 1999) and is associated with higher tumour stages and grades as well as disease recurrence, progression and decreased patient survival (Bartlett et al, 1998;Li et al, 1998;Watters et al, 2000). Polysomy 17 has been observed in 10% of grade 1 and 43% of grade 3 tumours (Watters et al, 2000) and 32% of pTa/T1 and 72% of pT2 tumours (Li et al, 1998). Gain of chromosome 17 may therefore be a useful marker of tumour progression.
The HER2/neu oncogene is located on chromosome 17q11-21 and encodes a tyrosine kinase transmembrane growth factor receptor. Activation of the HER2/neu receptor, following autophosphorylation of the tyrosine kinase residues, results in the activation of a cascade of intracellular proteins. Ultimately, the mitotic activity and metastatic potential of the cell increase (Underwood et al, 1995; Tzahar et al, 1996; Olayioye et al, 2000).
HER2/neu protein overexpression, assessed by immunohistochemistry (IHC), has been associated with increased tumour grade in TCC, but there is a wide variation in the literature, from 2 to 50% (Wright et al, 1990; Coombs et al, 1991; McCann et al, 1990; Underwood et al, 1995; Mellon et al, 1996). This may be due to the application of different antibodies and scoring systems. Similarly, although HER2/neu gene amplification rates are higher in muscle-invasive disease than in superficial disease, there is a wide variation in the literature, from 4 to 32% (Coombs et al, 1991; Underwood et al, 1995; Mellon et al, 1996). Accurate assessment of the prognostic significance of HER2 in the progression of bladder cancer requires standardisation of the laboratory techniques. Current UK guidelines recommend the use of IHC, with specific antibodies and scoring systems to assess protein overexpression, and fluorescence in situ hybridisation (FISH) to assess gene amplification. Our centre is one of the designated centres in the UK with the facilities to apply such techniques (Ellis et al, 2000). This is particularly important if the HER2/neu oncogene is to be targeted with one of the various anti-HER2/neu therapies being used in the treatment of other cancers. In breast cancer, tumours with evidence of HER2/neu gene amplification or strong protein overexpression respond to treatment with the anti-HER2/neu monoclonal antibody trastuzumab (Herceptin, Genentech Inc., San Francisco, USA). Response rates of 50% in combination with chemotherapy and 26% as monotherapy have been observed in women with metastatic breast cancer, as well as an increased time to progression (Vogel et al, 2001).
The aim of the present study was to assess HER2/neu protein overexpression and gene amplification in 25 tumour pairs, in which the first tumour of each pair was pre-(muscle)invasive and the second tumour (from the same patient) was postinvasive.
Patients
Patients with tumours that had progressed from superficial disease (pTa/pT1) to muscle-invasive disease (pT2) were identified from a bladder cancer database in the Department of Surgery, Glasgow Royal Infirmary. In order to assess HER2 abnormalities during disease progression to muscle-invasive disease, pTa/pT1 and pT2 tumours from the same patient were compared. All patients had full clinical follow-up (age, date of diagnosis, cystoscopic follow-up, tumour stage and grade, and survival). Ethical approval was obtained for these studies. Sections (5 μm) of formalin-fixed paraffin tissue were cut onto silanised slides and baked at 56 °C overnight. All representative TCCs analysed had one section stained with haematoxylin and eosin (H&E), and were restaged and regraded by a specialist urological pathologist (KMG). The pathologist rejected 52 tumours initially selected for the study because of the absence of detrusor muscle in either the pre- or postinvasive tumour. In order to be accepted for the study, both the pre- and postinvasive specimens had to contain detrusor muscle.
Fluorescence in situ hybridisation
The FISH methodology was as follows: tissue sections were dewaxed and rehydrated, then subjected to pretreatments with 0.2 N HCl for 20 min at room temperature, 8% sodium thiosulphate at 80 °C for 30 min, and 0.5% pepsin in 0.01 N HCl for 26 min at 37 °C. Tissue sections were postfixed in 10% neutral buffered formalin at room temperature for 10 min before dehydration in ascending grades of alcohol and air drying. These steps were carried out on a VP2000 robotic pretreatment slide processor (Vysis UK, Ltd). The tissue sections were assessed for the extent of tissue digestion (Watters et al, 2000). Tissue sections were denatured in 70% formamide, 2× SSC, pH 7-8, at 72 °C for 5 min on an Omnislide hybridisation module (Hybaid UK, Ltd). Probes for the pericentromeric region of chromosome 17 (SpectrumGreen) and the locus-specific probe for HER2 (SpectrumOrange) were used. For each section, 1 μl of each probe was added to 7 μl of hybridisation mix (50% formamide, 2× SSC, 10% dextran sulphate) and 1 μl of deionised water, denatured in a water bath at 72 °C for 5 min and then hybridised overnight at 37 °C. Posthybridisation washes were in 0.4× SSC, Nonidet 30, pH 7, at 72 °C for 2 min. The sections were mounted in 0.25 mg ml−1 DAPI antifade (Vectashield, UK) and viewed with a Leica DMLB microscope. A triple-band-pass filter block spanning the excitation and emission wavelengths of SpectrumOrange, SpectrumGreen and DAPI was used in the analysis of the hybridisation. Image capture was achieved using a digital camera (Leica DC 200, Leica, UK).
Fluorescence in situ hybridisation scoring
Serially sectioned haematoxylin- and eosin-stained tissue sections were first examined to localise areas of TCC. Fluorescence in situ hybridisation sections were then scanned at ×400 magnification to localise the areas of interest. In total, three areas were identified, and in each area 20 nuclei were assessed. Chromosome 17 copy number and HER2 copy number were assessed for each of the 20 nuclei at ×1000 magnification. Average chromosome 17 and HER2/neu copy numbers were obtained by totalling the number of signals over the 60 nuclei and dividing by the number of nuclei. Control sections of normal bladder and HER2/neu gene-amplified breast tumours were included in each run. The values for disomy were derived from the analysis of normal bladder postmortem tissue, as previously assessed (Bartlett et al, 1998; Watters et al, 2000). The average HER2/neu copy number was 1.7 (±0.1), and hence a HER2/neu copy number greater than 2 (1.7 + 3 × s.d.) was defined as 'increased'. The average chromosome 17 copy number was 1.7 (±0.06), and hence polysomy 17 was defined as a chromosome 17 copy number greater than 1.88 (1.7 + 3 × s.d.). Gene amplification was defined as an HER2/chromosome 17 ratio of greater than 2 (Bartlett, 2001), based on the value used in breast cancer diagnostics.
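The scoring arithmetic above reduces to a few thresholds, which makes it easy to state precisely in code. A minimal sketch applying the stated cutoffs is shown below (function and variable names are illustrative, not from the study):

```python
import numpy as np

# Cutoffs derived above from normal bladder tissue (mean + 3 s.d.)
HER2_CUTOFF = 1.7 + 3 * 0.1     # 2.0 signals per nucleus
CHR17_CUTOFF = 1.7 + 3 * 0.06   # 1.88 signals per nucleus
AMPLIFICATION_RATIO = 2.0       # HER2 / chromosome 17

def score_fish(her2_signals, chr17_signals):
    """Classify a tumour from per-nucleus signal counts (3 areas x 20 nuclei)."""
    her2 = np.mean(her2_signals)     # average copy number over 60 nuclei
    chr17 = np.mean(chr17_signals)
    return {
        'her2_copy_number': her2,
        'chr17_copy_number': chr17,
        'increased_her2': her2 > HER2_CUTOFF,
        'polysomy_17': chr17 > CHR17_CUTOFF,
        'amplified': her2 / chr17 > AMPLIFICATION_RATIO,
    }

# Hypothetical counts for 60 nuclei, for illustration only
rng = np.random.default_rng(1)
print(score_fish(rng.poisson(3.3, 60), rng.poisson(2.6, 60)))
```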
Immunohistochemistry
Antigen retrieval was performed by placing the slides in a pressure cooker containing 1 l of boiling water with 0.37 g (1 × 10−3 mol) of ethylenediaminetetraacetic acid (EDTA). The pressure cooker was placed in a microwave (850 W) for 13.5 min, then the lid was removed and the slides were left to stand for another 20 min. The slides were then loaded onto an automated machine (NEXUS II, Ventana, USA) with a rotating slide carousel. The following reagents were added in sequence automatically by the machine (all steps were performed at 37 °C, and reagents were purchased prepacked): (1) 0.1 ml of inhibitor (containing 1.1% hydrogen peroxide, which is metabolised by endogenous peroxidase), for 4 min; (2) 100 μl (0.63 μg ml−1) of CB11 monoclonal primary antibody (IgG1); (3) 0.1 ml each of amplifier A (IgG heavy and light chains) and B (IgG heavy chains) for 8 min; this binds to the previously bound primary antibody, increasing the number of antibodies at the site of the antigen and thereby increasing staining intensity; (4) 0.1 ml of biotinylated secondary antibody (IgG) for 8 min; (5) 0.1 ml of avidin-HRPO (horseradish peroxidase) conjugate, which binds to the biotin, for 8 min; (6) 0.1 ml of diaminobenzidine (DAB) for 8 min; this chromogen produces a brown precipitate at the sites of the avidin-biotin reaction; (7) 0.1 ml of copper sulphate to enhance the brown precipitate; (8) 0.1 ml of haematoxylin and 0.1 ml of bluing agent containing lithium carbonate to stain the nuclei blue. The slides were then dehydrated through graded alcohols, mounted and fixed with xylene and DPX. Normal bladder tissue controls and breast cancers with HER2/neu gene amplification and strong protein overexpression were used as controls in each run. Staining was scored using a light microscope; only membrane staining was scored, with cytoplasmic staining being ignored. A 4-point scale was used: '0' if there was no membrane staining, '1' if there was weak membrane staining in at least 10% of cells, '2' if there was moderate membrane staining in at least 10% of cells, and '3' if there was strong membrane staining in at least 10% of cells.
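Stated as code, the 4-point scale and the positivity convention used later in the Results ('2+'/'3+' positive) look as follows; the function names are hypothetical and simply restate the criteria above:

```python
def ihc_score(intensity, pct_membrane_stained):
    """Map membrane staining to the 4-point scale described above.

    intensity: 'none', 'weak', 'moderate' or 'strong'; cytoplasmic
    staining is ignored and plays no part in the score.
    """
    if intensity == 'none' or pct_membrane_stained < 10:
        return 0
    return {'weak': 1, 'moderate': 2, 'strong': 3}[intensity]

def is_positive(score):
    """'2+' and '3+' were considered positive in this study."""
    return score >= 2

assert ihc_score('strong', 80) == 3 and is_positive(3)
assert ihc_score('weak', 5) == 0       # below the 10% threshold
```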
Patients
The average age of the patients was 71.3 years (range 43-92), and there were 20 male and five female patients. The average time to progression from preinvasive disease was 20.6 months (range 2-90). There was one postoperative death, eight patients were still alive, and 16 out of 25 (64%) had died from their disease. The average survival from the time of diagnosis of muscle-invasive disease to death was 9.9 months (range 1-57). In the majority of cases, detrusor muscle invasion was preceded by either a pT1G3 (13 of 26 cases) or a pTaG2 (10 of 26 cases) tumour. Tables 1a and 1b give the stages and grades of the tumours, the FISH and IHC results, as well as the times to progression.
Fluorescence in situ hybridisation
The average HER2/neu copy number for the preinvasive tumours was 3.34 (range 1.73-18.30) and 3.48 for the postinvasive tumours (range 1.90-8.50); these were not statistically different (P = 0.086). Overall, 19 out of 25 (76%) of the preinvasive and 23 out of 25 (92%) of the postinvasive tumours had an increased HER2/neu copy number. The average chromosome 17 copy number for the preinvasive group was 2.58 (range 1.74-3.68) and 3.16 (range 1.62-8.48) for the postinvasive group; again, these were not statistically different (P = 0.067). Overall, 23 out of 25 (92%) of the preinvasive group and 24 out of 25 (96%) of the postinvasive group were polysomic for chromosome 17. The average HER2/chromosome 17 ratio was 1.34 in the preinvasive group and 1.17 in the postinvasive group; these values were not statistically different (P = 0.31). These results are summarised in Table 2. Two tumours in each of the pre- and postinvasive groups were amplified for HER2/neu, representing three patients in total. The stages and grades of these four tumours are given in Table 4.
DISCUSSION
Contemporary models of bladder TCC progression suggest that tumours of a higher stage and grade have accumulated more genetic changes (Simon et al, 1998). It is thought that genetic changes occur in sequence, such that certain genetic changes are more common in pT2 than in pTa/T1 tumours (Sauter et al, 1998). In the study by Simon et al (1998), the overall average number of genetic aberrations in pT1 tumours was 9.8, in comparison with 3.7 in pTa tumours, a statistically significant difference (P<0.01). The authors concluded that these two tumour groups are very different, both genetically and in terms of clinical behaviour, with the pT1 tumours more likely to progress to detrusor muscle-invasive disease than the pTa tumours, which did not progress. In the present study, there were 11 pTa and 14 pT1 tumours (Tables 1a and 1b), but in contrast to the study by Simon et al, all the tumours in this study progressed to detrusor muscle invasion. Table 2 compares the pTa tumours with the pT1 tumours in terms of HER2 copy number, chromosome 17 copy number and protein overexpression, demonstrating that they were not statistically different. This suggests that the subgroup of pTa tumours that are genetically different from pT1 tumours, as suggested by Simon et al, are those that are unlikely to progress to pT2 disease. However, in the present study, all the pTa/pT1 tumours progressed to pT2 disease and as such had a completely different clinical course. Hence, in terms of tumour progression, tumour stage and grade (Table 4) appear to be less important than the actual genetic changes that a tumour has accumulated. This study suggests that tumours that progress to pT2 disease have acquired significant HER2/neu abnormalities before muscle invasion. Oncogene activation is thought to occur late, and most genetic changes are thought to occur before disease progression (Tsao et al, 2000). It therefore appears that tumours that progress to pT2 disease have already acquired HER2/neu abnormalities before the onset of detrusor muscle invasion. Gene amplification rates were low, being present in 8% of both tumour groups, a value similar to previously published rates of 7 and 9% (Sauter et al, 1993; Underwood et al, 1995). The results are also similar to those recently published by our group, where polysomy rates for c-myc and CCND1 were higher than gene amplification rates (Watters et al, 2002). Polysomy has been shown to occur independently of tumour polyploidy and is not only chromosome specific but also closely related to tumour progression (Watters et al, 2002).
High levels of HER2/neu protein overexpression were also observed, with rates of 76 and 52% in pTa/T1 and pT2 tumours, respectively. These values were not significantly different (P = 0.06), again suggesting that high-level protein overexpression occurs before tumour progression. It has been suggested that either transcriptional or post-transcriptional mechanisms are responsible for the observed difference between protein overexpression and gene amplification. This phenomenon has also been observed in breast cancers: in one study, 22 out of 79 (29%) breast cancers had HER2/neu protein overexpression without HER2/neu gene amplification (Farabegoli et al, 1999). In a previous study by Sauter et al (1993), 89% of bladder tumours with HER2/neu protein overexpression did not have gene amplification, results similar to those of the present study. Transcription rates, and hence HER2/neu protein expression, are controlled by nuclear concentrations of transcription factors such as GATA-3 and OB2-1 (Shiga et al, 1993; Hollywood and Hurst, 1995). Higher levels of such transcription factors, even in the absence of gene amplification, result in increased HER2/neu protein overexpression. The stomach cancer cell lines SNU-1 and SNU-16 have similar HER2/neu transcription rates and similar mRNA concentrations, but the SNU-1 cells express the HER2/neu protein at a higher level than the SNU-16 cells; this is due to preferential translation of the mRNA in the SNU-1 cells (Bae et al, 2001).
Anti-HER2/neu therapy has been used to treat breast cancers in a clinical setting with encouraging results. Herceptin is a monoclonal antibody directed against the HER2/neu protein, which has antimitotic and antiangiogenic effects on tumour cells. In breast cancer, response rates have been highest in tumours with strong protein overexpression (which are also gene amplified); overall response rates of 50% have been observed. Furthermore, synergism has been demonstrated between Herceptin and conventional chemotherapeutic agents such as cisplatin (Baselga, 2001). The high rate of strong ('3+') protein overexpression in the pTa/T1 tumours (60%), together with the high rates of polysomy 17 and increased HER2/neu copy number, suggests that Herceptin might also be of value in the treatment of TCC. In particular, the application of anti-HER2/neu therapy to the pTa or pT1 tumours most likely to progress to pT2 disease, based upon the presence of increased HER2 copy number, polysomy 17 and increased HER2/neu protein overexpression, might lower the rate of disease progression. There was no difference in the rates of polysomy 17 (P = 0.21), HER2/neu copy number (P = 0.34) or HER2/chromosome 17 ratio (P = 0.44) between the pTa and pT1 tumours. In the HER2 immunohistochemistry results for the 25 pairs of patients, both '2+' and '3+' were considered positive and '0' and '1+' were considered negative; in total, 19 out of 25 (76%) of the preinvasive tumours were positive, whereas the level of protein overexpression was lower in the postinvasive tumours (13 out of 26, 50%). The tumours with HER2/chromosome 17 ratios of 2.18 and 2.09 are from the same patient, suggesting that gene amplification is present before the onset of muscle invasion and persists. The preinvasive G3 pT1 tumour with the highest level of amplification had a postinvasive HER2/chromosome 17 ratio of 1.45. The postinvasive G3 pT2 tumour with borderline gene amplification of 2.01 had a preinvasive HER2/chromosome 17 ratio of 0.96.
"Medicine",
"Biology"
] |
Ultrafast Pulse Generation from Quantum Cascade Lasers
Quantum cascade lasers (QCLs) have broken the spectral barriers of semiconductor lasers and enabled a range of applications in the mid-infrared (MIR) and terahertz (THz) regimes. However, until recently, generating ultrashort and intense pulses from QCLs has been difficult. Such pulses would be useful for studying ultrafast processes in the MIR and THz using the targeted wavelength-by-design properties of QCLs. Since the first demonstration in 2009, mode-locking of QCLs has undergone considerable development over the past decade, including the elucidation of the underlying mechanism of pulse formation, the development of an ultrafast THz detection technique, and the invention of novel pulse compression technologies. Here, we review the history and recent progress of ultrafast pulse generation from QCLs in both the THz and MIR regimes.
Introduction
Quantum cascade lasers (QCLs) are electrically pumped, compact semiconductor light sources that were first demonstrated in the mid-infrared in 1994 by Faist et al. at Bell Labs [1], and in the terahertz (THz) frequency range by Köhler et al. at Scuola Normale Superiore in 2002 [2]. The QCL concept has enabled powerful and compact coherent light sources in previously inaccessible or impractical mid-infrared and THz regions of the electromagnetic spectrum. In the mid-infrared, QCLs have achieved impressive performance, with more than 5.6 W output power from a single facet [3-6] and a high wall-plug efficiency of up to 31% at room temperature (RT) in continuous wave (CW) operation [7]. In addition, high-beam-quality single-mode long-wave infrared (LWIR) QCLs with record light extraction (2.0 MW cm−2 sr−1 for λ ≈ 10 µm, 2.2 MW cm−2 sr−1 for λ ≈ 9 µm, 5.0 MW cm−2 sr−1 for λ ≈ 8 µm) from a single facet in CW operation at 15 °C have also been demonstrated [8]. These results mark an important milestone in the light-emitting capability of inter-sub-band semiconductor lasers in the mid-infrared spectral range. Beyond the Reststrahlen band (>50 µm), QCLs have also shown remarkable development: high output power of over 1 W, far-field engineering on metal-metal waveguides, quantum-limited linewidths and self-generated frequency combs have been demonstrated [9-13]. Although challenges remain, the further development and exploitation of QCLs is crucial, given the unparalleled success of these devices in terms of output power and wavelength agility in a compact, potentially inexpensive and user-friendly geometry.
As unipolar devices, photon emission in QCLs is based on intersubband transitions in the conduction band of quantum heterostructures. These transitions exhibit ultra-short carrier lifetimes on the same (picosecond) scale as the photon lifetime, which leads to the absence of relaxation oscillations in the transient response of these devices and to ultrafast gain dynamics. The ultrafast gain dynamics of QCLs, combined with Kerr nonlinearities, group velocity dispersion (material dispersion, waveguide dispersion, and gain dispersion), and spatial hole burning, determine the pulse formation dynamics in QCLs. Intersubband transitions feature strong third-order optical nonlinearities, due to the large optical matrix element between the excited states and the empty lower states, allowing parametric processes through four-wave mixing (FWM) [36]. Through the cascaded FWM process based on multiple laser longitudinal modes and low group velocity dispersion (GVD), free-running combs with frequency modulation have been achieved in MIR QCLs (these are frequency-modulated combs and hence, in principle, involve no pulse generation) [36,37]. In addition, it has been found that a finite linewidth enhancement factor in fast-gain-medium lasers leads to considerable Kerr nonlinearities, more so than in interband lasers with slow gain dynamics; shorter carrier lifetimes lead to a wider FWM gain bandwidth, which in turn supports wider multi-mode emission [37]. Carrier lifetimes in THz QCLs are an order of magnitude longer than in MIR QCLs, which in principle makes pulse formation in THz QCLs easier than in MIR QCLs. However, the semiconductor material is more dispersive at THz frequencies than at mid-infrared frequencies, due to stronger coupling with the crystalline lattice (for instance, the GVD of GaAs at 40 K is 250 times higher at 3.5 THz than at 7 µm) [12]; thus, dispersion compensation techniques, such as a chirped corrugation etched into the facet of the laser [12] or GTIs [26], have to be employed to form stable pulses. In multi-mode QCLs with Fabry-Perot cavities, the waves travelling in the forward and backward directions are coupled, as they share the same gain medium; this gives rise to spatial hole burning (SHB), which favors multi-mode emission and can help to further reduce the duration of the short pulses in QCLs. On the other hand, SHB also results in pulse instabilities and non-stationary pulse generation [38]. Furthermore, it has been concluded that the combined effects of SHB, GVD and Kerr nonlinearities due to asymmetric gain give rise to the recently observed linear frequency chirp [39]. The self-starting frequency combs generated in QCLs can be improved by RF injection through active mode-locking [22,23], and even with harmonic mode-locking [28], which provides possibilities for repetition rates beyond the limit set by the laser cavity length. Recently, soliton structures have been observed in ring QCLs, which opens interesting physics questions for lasers with fast gain dynamics.
The theoretical models used to investigate multi-mode dynamics and QCL combs include reduced rate equations [40,41], Maxwell-Bloch equations [25,42], and Master equations [29,39,43,44]. The multi-mode reduced rate equations (Equations (1)-(4)) are based on interactions between electrons and photons through stimulated emission, spontaneous emission, and stimulated absorption. This model is well suited to studying time-resolved electron and photon transport dynamics and the steady-state behavior of the laser, such as light-current-voltage (LIV) curves. It has been used to study frequency tuning mechanisms [40] and ultrafast mode-switching dynamics in coupled-cavity QCLs [29]. It can also be adapted to include external perturbations, such as optical injection and optical feedback effects, and has been used to study single-mode and multi-mode dynamics under optical feedback in QCLs [41]. However, as a rate-equation model, it does not include spatially dependent effects, such as SHB.
In Equations (1)-(4), N3(t) and N2(t) are the carrier populations in the upper and lower laser levels of the active medium (ULL and LLL), respectively, and Sm(t) and φm(t) are the photon population and the phase of the electric field in longitudinal mode m. The other input parameters include the injection efficiencies into the ULL and LLL, η3 and η2; the drive current I; the number of periods in the active cavity M; the spontaneous emission factor βsp; the carrier lifetimes τ3, τ32, and τ2; the photon lifetime τp; the spontaneous emission lifetime τsp; the linewidth enhancement factor α; and the gain factor Gm for mode m. The gain recovery time in QCLs is described in this model by the total carrier lifetime in the ULL, τ3. The dependence of the optical gain on the population inversion and the amplitude-to-phase coupling are also included. The Maxwell-Bloch equations combine the Bloch equations and the wave equation, forming a set of equations for the normalized envelope of the electric field, the polarization, and the population inversion in the gain medium. By considering the polarization of the electric field, which describes the interactions between the laser field and the gain medium, this model includes the effects of Kerr nonlinearities through the optical susceptibility. It also includes coherent coupling between the populations, such as Risken-Nummedal-Graham-Haken (RNGH) instabilities induced by coherent resonant tunneling between adjacent stages in the active region. In addition, this model has time and space (the z direction only) as independent variables, so it can include the SHB effects that originate from the standing waves in Fabry-Perot laser cavities, which play an important role in multimode operation and pulse duration reduction in QCL comb studies. This model has been used to investigate self-starting mode-locking and the formation of optical instabilities in QCLs [45,46]. However, as a full model, it is difficult to use it to disentangle the roles of the individual physical effects in the formed frequency combs or pulses.
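To make the rate-equation picture concrete, here is a minimal single-mode integration sketch. The term-by-term structure follows the description above, but the exact multi-mode form of Equations (1)-(4) in Refs. [40,41] differs, and every parameter value below is an illustrative placeholder:

```python
import numpy as np
from scipy.integrate import solve_ivp

Q_E = 1.602e-19                         # elementary charge (C)
eta3, eta2 = 0.6, 0.05                  # injection efficiencies (ULL, LLL)
I, M = 0.5, 90                          # drive current (A), number of periods
tau3, tau32, tau2 = 1.5e-12, 2.5e-12, 0.3e-12   # carrier lifetimes (s)
tau_p, tau_sp = 50e-12, 5e-9            # photon, spontaneous lifetimes (s)
beta_sp, G = 1e-4, 1.0e4                # spont. emission factor, gain factor

def rhs(t, y):
    n3, n2, s = y                       # ULL carriers, LLL carriers, photons
    stim = G * (n3 - n2) * s            # stimulated emission/absorption
    dn3 = eta3 * I / Q_E - n3 / tau3 - stim
    dn2 = eta2 * I / Q_E + n3 / tau32 - n2 / tau2 + stim
    ds = M * G * (n3 - n2) * s - s / tau_p + M * beta_sp * n3 / tau_sp
    return [dn3, dn2, ds]

sol = solve_ivp(rhs, (0.0, 2e-9), [0.0, 0.0, 0.0],
                method='LSODA', max_step=1e-12)
print(f"steady-state photon number ~ {sol.y[2, -1]:.3g}")
```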
The conventional Haus Master equation can be used to study how the pulse shape varies under gain dispersion and Kerr nonlinearities in conventional diode lasers, where the gain dynamics are not fast, such that the gain recovery time is longer than one laser round-trip time, as shown in Figure 1 [47]. Despite its popularity, the Haus Master equation approach does not account for light-matter coherent effects and, additionally, in the case of active mode-locking, its validity requires sufficiently slow medium dynamics. In QCLs, with their fast gain dynamics and pronounced coherence effects such as RNGH instabilities, the conventional Master equation therefore falls short. Furthermore, the Haus Master equation only applies to amplitude-modulated combs, whilst free-running QCL combs are frequency modulated, predominantly governed by the phase dynamics. The Haus Master equation has, however, recently been developed into the coherent Master equation [44] and the reduced Master equation [39]. By considering the nature of the fast gain dynamics, the coherent Master equation is more suitable for modeling QCLs. The reduced Master equation can be used to reproduce the behavior of frequency-modulation combs in QCLs and to study the roles of SHB, FWM and GVD in pulse formation.
RF Injection Locking of QCLs
Injection locking was originally used to transfer the spectral purity and stability of a master laser to a slave laser [48,49]. Typically, the master laser is a low-power, spectrally pure laser, whereas the slave laser is a high-power, spectrally broad laser. When an optical seed from the master laser is injected into the cavity of the slave, the slave laser inherits the spectral and noise properties of the master laser while maintaining its high output power. This technique shows great advantages for the amplification and stabilization of a single-mode laser. However, for a broadband QCL it is not practical to realize optical injection locking under the master oscillator power amplifier (MOPA) architecture, as it would require locking many longitudinal modes together. Therefore, electrical injection locking has been developed to achieve this goal, by locking the free spectral range (FSR) of the QCL to a microwave signal resonant with its round-trip frequency instead of optically locking hundreds of longitudinal modes individually. Indeed, this approach has enabled active and hybrid mode-locking of inter-band semiconductor lasers [50-52]. Here, we emphasize this technique from both theoretical and experimental aspects, as it is the underpinning approach for ultrafast pulse generation and active mode-locking of QCLs.
When discussing injection locking, the beatnote of a laser must first be introduced. In a QCL system, the beatnote is a series of discrete RF signals arising from the electrical beating of any two Fabry-Perot modes (fn+1 − fn). For a 3-mm-long QCL cavity, the fundamental beatnote frequency is close to 13 GHz, with harmonics at 26 GHz, 39 GHz, and so on. The fundamental beatnote is the most important parameter for active mode-locking and can be expressed through the frequency difference between any two adjacent longitudinal modes, as given in Equation (5):

$$f_{\mathrm{beatnote}} = f_{n+1} - f_{n} \qquad (5)$$

The full width at half maximum (FWHM) of the beatnote signal is dominated by the frequency jitter of the QCL modes. Similar to optical injection locking, by injecting an RF signal (fRF) that is resonant with the FSR or beatnote (fbeatnote) into the QCL system, the spectral purity and stability of the low-noise external RF signal can be transferred to the QCL. From the complex amplitude evolution of the RF field in the system, injection locking can be described using the following equation [53]:

$$\frac{d\phi}{dt} = (\omega_{RF} - \Delta\omega) - \omega_{L}\sin\phi \qquad (6)$$

The injection-locking theory was developed by Adler to describe the behavior of coupled nonlinear electronic oscillators [53]; however, it is ubiquitous in physical systems involving frequency locking between several oscillators, such as lasers, mechanical oscillators, and gyroscopes. In Equation (6), φ represents the phase difference between the RF signal and the internal electrical beating signal of the QCL, ωRF is the angular frequency of the RF signal, Δω is the angular frequency of the beatnote, and ωL is the locking range, which can be further expressed as [54]:

$$\omega_{L} = \frac{\omega_{0}}{2Q}\sqrt{\frac{P_{inj}}{P_{0}}} \qquad (7)$$

In Equation (7), ω0 is the free-running oscillation angular frequency, Q is the oscillator Q-factor, Pinj is the injected power of the RF source, and P0 is the optical power within the laser cavity. When the condition |ωRF − Δω| < ωL is satisfied, Equation (6) has a steady-state solution, sin φ = (ωRF − Δω)/ωL. In this case, the beatnote is locked to the injected RF signal and changes with it. When |ωRF − Δω| > ωL, the system falls out of the locking range and the beatnote will no longer be equal to the external modulation frequency. To mode-lock a QCL, the injected RF frequency and power therefore have to satisfy the locking conditions given above.
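A short numerical experiment makes the locking condition tangible: integrating Equation (6) for detunings inside and outside the locking range shows the phase either settling to Adler's steady state (locked) or slipping continuously (unlocked). All values below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

def adler(t, phi, detuning, lock_range):
    """Equation (6): dphi/dt = (w_RF - dw) - w_L * sin(phi)."""
    return detuning - lock_range * np.sin(phi)

w_L = 2 * np.pi * 1e6                    # 1 MHz locking range (illustrative)
t_end = 1e-4                             # integrate for 100 us

for detuning_hz in (0.5e6, 2.0e6):       # inside / outside the locking range
    d = 2 * np.pi * detuning_hz
    sol = solve_ivp(adler, (0.0, t_end), [0.0],
                    args=(d, w_L), max_step=1e-8)
    # Average residual beat between RF signal and beatnote, in Hz
    residual = (sol.y[0, -1] - sol.y[0, 0]) / t_end / (2 * np.pi)
    state = 'locked' if abs(d) < w_L else 'unlocked'
    print(f"detuning {detuning_hz / 1e6:.1f} MHz ({state}): "
          f"residual beat ~ {residual:.3g} Hz")
```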
Direct RF modulation was first introduced to the THz QCL community in 2007 by Barbieri et al. [55]. They modulated the bias current injected into a THz QCL and observed the appearance of sideband modes in the emission spectrum, with a spacing that could be continuously tuned up to 13 GHz. The most important phenomenon observed in the experiment was that, when the modulation frequency approached the round-trip frequency of photons circulating in the resonant cavity, the number of QCL sidebands increased considerably. This phenomenon, already observed in traditional lasers, was confirmed in QCLs for the first time and was also in agreement with the injection-locking theory above. According to the Fourier transform, the broadened spectrum can potentially correspond to short pulses in the time domain.
Thereafter, the injection locking of THz QCLs was studied in detail by the same group [54]. They investigated the longitudinal mode behavior of QCLs under different external modulation conditions: in the first case, the modulation frequency was fixed and the RF modulation power was varied; in the second, the modulation power was fixed and the modulation frequency was varied. In both cases, a clear frequency "pulling" effect was observed, as shown in Ref. [54]. They found a square-root dependence of the locking range on RF power, in agreement with classical injection-locking theory as given in Equation (7). This THz QCL showed a locking range above 200 MHz, also in agreement with the theory described by Equation (7).
Then, injection locking and harmonic injection locking were also demonstrated in mid-infrared QCLs through direct microwave modulation [56,57]. As shown in the light-current-voltage (LIV) curve and the spectrum in Figure 2a,c, the QCL, with a broadband emission spectrum spanning 8.0-8.6 µm, is capable of delivering a high optical power of over 2 W from a single facet in CW operation at approximately room temperature. Figure 2b gives the scanning electron microscope (SEM) image of the high-power long-wave infrared QCL. Figure 2d shows the evolution of the beatnote (continuous branch) of the QCL as a function of the injected RF frequency (discrete branch). Figure 2e shows the beatnote frequency (magenta) as a function of the detuning δ between the RF frequency fRF and the beatnote without RF injection Δf0, together with the frequency difference δfRF−beatnote between the RF signal and the beatnote (blue) as a function of the detuning δ. From these two panels, we can clearly see that the beatnote frequency is locked to the injected RF frequency. However, the locking range is less than 1 MHz, due to a much higher intra-cavity power compared with the result in Ref. [56]. This experiment was also in agreement with the injection-locking theory described by Equations (5)-(7).
Mode-Locked THz QCLs
The class of a laser is the dominating factor in its transient behavior, which determines how it generates short pulses. For QCLs, the photon lifetime (τcav) in the laser cavity is of the order of 100-200 ps, while the lifetime of electrons (τ) in the excited energy levels is of the order of a few picoseconds. This condition (τcav >> τ) gives an exponential-growth transient behavior in the switch-on dynamics of a QCL. Hence, QCLs are class-A lasers, which theoretically makes ultrafast pulse generation through Q-switching difficult. Mode-locking is, therefore, the only practical choice to generate ultrafast light pulses from QCLs.
Mode-locking is a widely used technique to generate ultra-short and intense pulses from lasers. Generally, when a laser is in operation, there is more than one resonant frequency that can be amplified and propagate in its cavity, as schematically illustrated in Figure 3 (green waves). These frequencies are called the longitudinal modes of a laser and are determined by the cavity length and the active medium. If all the modes are in phase and the spacing between them is identical, the electric fields of all these modes interfere constructively. This results in an ultra-short and intense pulse (red in Figure 3) in the laser cavity, which propagates back and forth within the cavity and is partially coupled out from the cavity mirrors on every round trip. Temporally, a train of pulses separated by the laser cavity round-trip time is obtained. This is so-called mode-locking: putting all the longitudinal modes in phase (i.e., equal mode spacing Δω and time-independent phase φ) is the core of the technique.
If we suppose that the electric field of one longitudinal mode, for example the m-th one, is Em(t) = Am e^{2πi(fm t + φm)} + c.c., adding the electric fields of all these resonant modes together gives the laser emission in the time domain:

$$E(t) = \sum_{m} A_{m}\, e^{2\pi i (f_{m} t + \varphi_{m})} + \mathrm{c.c.}, \qquad f_{m} = f_{0} + m\,\delta f + \Delta f_{m} \qquad (8)$$

where Am, fm, and φm are, respectively, the amplitude, frequency, and time-independent phase of the m-th mode; δf and Δfm are, respectively, the cold-cavity mode spacing and the frequency-dependent mode shifting induced by the hot cavity; and Φm = 2πΔfm t + φm is the time-dependent phase of the m-th mode, with φm its time-independent part.
We now consider how these parameters affect the temporal behavior of the electric field. The time-dependent phase varies between modes and changes continuously with time. This brings a 'random' (unfixed) phase relation between the modes at any instant and results in continuous (non-periodic) wave emission in time, as shown in Figure 4a,b, which is calculated from Equation (8). We can see that the emission is not periodic, because the non-equal mode spacing introduces a time-dependent floating phase. This is why even a broadband laser does not give stable pulse emission under free-running conditions. Now, if Δfm = 0 (the spacing between the modes is constant) but φm varies between modes, the laser emission becomes periodic and pulses are observed, but the pulse shape is heavily deformed, as shown in Figure 4c.
We now consider how these parameters affect the temporal behavior of the electric field. The time-dependent phase varies between modes and is always changing with time. This will bring a 'random (unfixed) phase-relation' between the modes at any time and will result in continuous (non-periodic) wave emission in time, as shown in Figure 4a,b, which is calculated from Equation (8). We can see that the emission is not periodic, due to non-equal mode spacing bringing a time-dependent floating phase. This is why even a broadband laser does not give us stable pulse emission under free-running conditions. Now, if Δfm = 0 (the spacing between these modes is constant), but ϕm varies for all modes, laser emission will become periodic, with pulses being observed, but the pulse shape will be heavily deformed, as shown in Figure 4c As discussed above, the key mission of mode-locking is to remove or fix the time dependent phase term and make the mode spacing and phase of a laser to be identical. As discussed above, the key mission of mode-locking is to remove or fix the time dependent phase term and make the mode spacing and phase of a laser to be identical. Generally, when a laser is mode-locked and periodic pulses are generated, δ f and φ m will be automatically fixed due to the "phase-matched" modulation imposed on these modes.
How can we fix the free spectral range δf of a laser and keep the modes in phase? There are many ways to achieve this, including active mode-locking, passive mode-locking and hybrid mode-locking. Each type of mode-locking can also be realized by many different techniques, such as direct current modulation [58], acousto-optic modulation [59], saturable absorption [60], and the nonlinear Kerr effect [61].
Here we present active mode-locking, as it is the technique best suited to pulse generation from QCLs. Generally, we employ an electrical modulation $\omega_M$, which is monochromatic and very close to the mode spacing $\delta\omega$, to modulate the laser directly (i.e., modulation at the round-trip frequency of the cavity). First, let us consider the effect of the modulation on the frequency $\omega_m$, as illustrated in Figure 5a. Before modulation is applied, the free-running mode spacings are not identical, $\delta\omega_m \neq \delta\omega_{m+1}$. When modulation is applied, the central frequency $\omega_m$ transfers part of its energy to its modulation sidebands ($\omega_m + \omega_M$, $\omega_m - \omega_M$), which are close in frequency to the two free-running modes ($\omega_{m-1}$, $\omega_{m+1}$) of the cavity. If the modulation power is strong enough, it forces the free-running frequencies ($\omega_{m-1}$, $\omega_{m+1}$) to move towards the sideband positions ($\omega_m - \omega_M$, $\omega_m + \omega_M$) until they totally overlap: $\omega_{m-1} = \omega_m - \omega_M$, $\omega_{m+1} = \omega_m + \omega_M$. Finally, the mode spacing is locked to the modulation frequency, $\delta\omega_m = \delta\omega_{m+1} = \omega_M$, as presented in Figure 5b.
Above, we have analyzed the modulation effect only on the central frequency $\omega_m$. The other frequencies can be analyzed in the same way: $\omega_{m-1}$ gives $\delta\omega_{m-1} = \delta\omega_m = \omega_M$, $\omega_{m-2}$ gives $\delta\omega_{m-2} = \delta\omega_{m-1} = \omega_M$, $\omega_{m-3}$ gives $\delta\omega_{m-3} = \delta\omega_{m-2} = \omega_M$, and so on. In the end, $\delta\omega_{m-2} = \delta\omega_{m-1} = \delta\omega_m = \delta\omega_{m+1} = \delta\omega_{m+2} = \cdots = \omega_M$, i.e., the mode spacing over the whole emission spectrum is fixed to the modulation frequency, as shown in Figure 5c. Simultaneously, the time-independent phases $\varphi_m$ become identical among all the modes, due to the synchronization of the modes by the active modulation. As discussed for Figure 4, once the mode spacing is locked and the phases are identical, the laser emission in the time domain is pulsed and active mode-locking is realized.
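The frequency-pulling picture above can be caricatured in a few lines. The following toy relaxation model is purely our own illustration, not the coupled-mode theory of a real QCL: each mode is iteratively pulled toward the sidebands of its neighbours with an assumed strength kappa, and the spacing converges to omega_M.

```python
import numpy as np

# Toy model: mode frequencies relax toward the neighbours' modulation
# sidebands omega_{m+-1} -/+ omega_M; kappa and the initial disorder are assumed.
N = 9
omega_M = 2 * np.pi * 13.0e9                 # modulation frequency (rad/s)
m = np.arange(N) - N // 2
rng = np.random.default_rng(1)
omega = m * omega_M * (1 + 0.01 * rng.normal(size=N))  # unequal hot-cavity modes
kappa = 0.2                                   # pulling strength per step

for _ in range(300):
    target = np.empty_like(omega)
    target[1:-1] = 0.5 * ((omega[:-2] + omega_M) + (omega[2:] - omega_M))
    target[0], target[-1] = omega[1] - omega_M, omega[-2] + omega_M
    omega += kappa * (target - omega)         # relax toward sideband positions

spacing_GHz = np.diff(omega) / (2 * np.pi * 1e9)
print("locked spacings (GHz):", np.round(spacing_GHz, 5))  # all -> 13 GHz
```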
Unlike traditional semiconductor lasers, the QCL transition takes place between two inter-sub-band energy levels originating from the nanoscale confinement of electrons in quantum wells. As mentioned above, this leads to a fast gain recovery time, orders of magnitude shorter than in interband lasers [19][20][21][23], and considerably shorter than the photon round-trip time of the cavity. This is believed to prevent these devices from being mode-locked (multiple pulses are generated within the QCL cavity) and, thus, to make them unable to generate short pulses using passive approaches.
However, it has been shown recently that these devices can be actively mode-locked, with the QCL modulated at microwave frequencies, to generate a train of picosecond pulses [22,62]. The key to these demonstrations has been the development of new ultrafast techniques for the THz range. In the first case, detection of the emitted pulse train was made possible by phase-locking the QCL repetition rate and carrier frequency to a high-order harmonic of the repetition rate of a mode-locked femtosecond laser. This technique permits coherent detection of the THz electric field and allows control of the carrier-envelope phase shift of the QCL. Its disadvantage is that it undersamples the electric field of the laser pulse train in the time domain.
An alternative ultrafast detection technique, the "injection seeding technique", has also been developed [63]. This technique can measure all the information of the QCL emission in the time domain, including phase, amplitude, intensity, spectrum and the full electric field, as shown in Figure 6. This makes it possible to observe pulse-train generation directly and has paved the way for demonstrating QCL mode-locking directly in the time domain.
Immediately after the development of this injection seeding technique, mode-locking of THz QCLs was realized and demonstrated in the time domain [32][33][34][64]. A series of important works was published on this topic, showing that THz QCLs could be mode-locked for short pulse generation. Figure 7a shows the THz intensity emitted by an actively mode-locked QCL over picosecond time scales (without a seed). Both the initiation of mode-locked pulses and the steady-state regime were examined. For bias conditions well above threshold, a sinusoidal modulation of the emission was observed; however, when the QCL was biased around threshold and the round-trip modulation was strong, Gaussian-shaped, transform-limited mode-locked pulses with a full width at half maximum (FWHM) of 19 ps were obtained. Figure 7b shows the electric field (left) and the corresponding spectra (right) of the QCL emission with and without round-trip modulation, measured in the time domain using the injection seeding technique. The method relied on synchronizing the mode-locked pulses to a reference laser and was applied to 15-ps pulses generated by a 2-THz QCL. The pulses from the actively mode-locked laser were completely characterized in field and in time with sub-ps resolution, allowing the amplitude and phase of each cavity mode to be determined. Figure 7c shows a zoom-in on a light pulse from the mode-locked QCL, in which the oscillation of the electric field of the laser emission is clearly resolved.
Since then, mode-locked THz QCLs have been experimentally demonstrated using the different detection approaches discussed above. However, the exact mechanism of mode-locking in QCLs has remained unclear, which strongly limits the new avenues that can be explored to generate shorter and more intense laser pulses. Over a series of samples and measurements [23], it has been found that, contrary to the long-standing belief that the QCL gain dynamics are the limiting factor, the key mechanism is in fact a nonlinear interaction between the generated pulse and the applied electrical modulation [23], as shown in Figure 8a. This important insight has permitted new avenues to be explored to generate shorter and more intense pulses. Figure 8b shows Maxwell-Bloch simulations of the gain recovery time (T$_1$), calculated using Maxwell-Bloch finite-difference time-domain simulations of a two-level system [23]. The procedure is detailed in depth in Ref. [31].
Here, a dephasing time of about 0.6 ps, extracted from the full width at half maximum of the gain, and a total waveguide loss of 12 cm−1, obtained from first-pass gain measurements of the longitudinal-optical (LO) phonon-depopulation-based QCL, were used. A gain recovery time of 5 ps gave the best fit to the data. The ultrafast gain recovery time measured here, which did not limit pulse generation, could be exploited to generate more intense and shorter pulses if short, intense electrical pulses could be used to switch on the QCL gain; for example, a Gaussian or Lorentzian profile could be used. Although difficult to generate electronically, optically generated electrical pulses using ultrafast lasers combined with ultrafast materials are feasible, and these could then be used to switch the QCL on sub-picosecond time scales. Further techniques that could circumvent the current limitations are the application of greater microwave power for higher pulse energies and the application of hybrid mode-locking techniques to shorten the pulses to sub-10-ps values. Figure 8c (top) shows the spectra of a seeded (red) and a mode-locked (black) QCL; (bottom) the phases of the eight mode-locked longitudinal modes (green triangles).
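To give a feel for what a 5 ps gain recovery means relative to a 19 ps pulse, here is a minimal saturable-gain rate equation; this is our own toy model, not the Maxwell-Bloch FDTD of Ref. [31], and g0 and E_sat are assumed values.

```python
import numpy as np

# Toy saturable-gain rate equation dg/dt = (g0 - g)/T1 - g*I(t)/E_sat,
# driven by a Gaussian pulse; only T1 = 5 ps and the 19 ps FWHM come from
# the text, everything else is an assumed, dimensionless choice.
T1 = 5.0            # gain recovery time (ps)
g0 = 1.0            # small-signal gain (assumed)
E_sat = 2.0         # saturation fluence (assumed)
dt = 0.01           # time step (ps)
t = np.arange(0.0, 40.0, dt)
I = np.exp(-4.0 * np.log(2.0) * ((t - 10.0) / 19.0) ** 2)  # 19 ps FWHM pulse

g = np.empty_like(t)
g[0] = g0
for k in range(1, t.size):                 # explicit Euler integration
    dg = (g0 - g[k - 1]) / T1 - g[k - 1] * I[k - 1] / E_sat
    g[k] = g[k - 1] + dt * dg

print(f"gain dips to {g.min():.3f} of g0 and recovers to {g[-1]:.3f} by t = 40 ps")
```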
Pulse Shortening in Mode-Locked THz QCLs
As of 2013, actively mode-locked THz QCLs had been demonstrated by different groups using different measurement approaches. However, the pulse width of mode-locked QCLs remained quite large, between 10 and 20 ps. Researchers attempted many different approaches, including broad-bandwidth QCLs, different cavity geometries and hybrid mode-locking techniques, to compress the pulse width below 10 ps, but without success despite intense research efforts [54,56,57,65].
In 2016, a research group at TU Wien showed that single THz pulses as short as 2.5 ps could be generated from a QCL [66]. However, this was not a train of pulses: subsequent pulses broadened, as the QCL was not actively mode-locked.
To realize a mode-locked pulse train, a monolithic on-chip dispersion compensation scheme to shorten the THz pulses of mode-locked QCLs was proposed [26]. It was based on a small coupled-cavity resonator acting as an 'off-resonance' Gires-Tournois interferometer (GTI), permitting large THz spectral bandwidths to be compensated, as shown in Figure 9. In this work, the THz pulses of mode-locked QCLs were considerably shortened, from 16 ps to 4 ps. This permitted the compression of THz pulses from mode-locked QCLs beyond the 10 ps barrier that had stood for several years. This result marks an important milestone in exploring ultrafast light-pulse generation from mode-locked QCLs. The novel application of a GTI also opens a direct route to sub-picosecond and single-cycle pulses in the THz range from a compact semiconductor source.
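The dispersion-compensating action of a GTI can be checked from its textbook all-pass reflectance. The sketch below uses assumed mirror reflectivity and cavity length, not the device parameters of Ref. [26], and differentiates the reflected phase numerically to obtain the group-delay dispersion.

```python
import numpy as np

# Gires-Tournois interferometer: lossless all-pass reflectance and its
# group-delay dispersion (GDD). r, n and L below are assumed values.
c = 3e8
r = 0.6                      # front-mirror amplitude reflectivity (assumed)
n, L = 3.6, 50e-6            # GaAs-like index and GTI cavity length (assumed)
T = 2 * n * L / c            # round-trip time of the GTI sub-cavity

f = np.linspace(2.0e12, 4.0e12, 4001)          # THz frequency span
w = 2 * np.pi * f
delta = w * T                                   # round-trip phase
r_gti = (-r + np.exp(-1j * delta)) / (1 - r * np.exp(-1j * delta))

phase = np.unwrap(np.angle(r_gti))
gd = np.gradient(phase, w)                      # group delay d(phi)/d(omega), s
gdd = np.gradient(gd, w)                        # GDD d2(phi)/d(omega)^2, s^2

i = np.argmax(np.abs(gdd))
print(f"|r| = {np.abs(r_gti).mean():.3f} (lossless: all energy reflected)")
print(f"peak |GDD| = {abs(gdd[i]) * 1e24:.1f} ps^2 at {f[i] / 1e12:.2f} THz")
```

Operating the laser 'off resonance' of such a sub-cavity, as described above, places the emission on the flank of the GDD curve where the sign and magnitude of the dispersion can cancel that of the gain medium.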
Pulse Generation in Mid-Infrared QCLs
The development of mid-infrared QCLs is far ahead of that of THz QCLs [3,4,6-8,67], but their mode-locking lags behind that of THz QCLs due to an even shorter gain recovery time (~1 ps). Following the mode-locking of THz QCLs, the generation of ultrashort pulses from mid-infrared QCLs has also seen considerable progress in the past few years. The first experimental demonstration was realized in 2009 by Capasso's group at Harvard [62], and a theoretical demonstration of active mode-locking of such QCLs was reported in 2015 by Belyanin's group in Texas [25]. They investigated the dynamics of actively modulated mid-infrared QCLs using space- and time-domain simulations of coupled density matrix and Maxwell equations, with resonant tunneling current taken into account. They showed that it was possible to achieve active mode-locking and stable generation of picosecond pulses in QCLs by bias modulation of a short section of a monolithic Fabry-Pérot cavity.
In the same year, active mode-locking of mid-infrared QCLs at a wavelength of 5 µm was experimentally demonstrated in a free-space external ring-cavity QCL, as shown in Figure 10a [24]. The laser operated at room temperature and remained mode-locked over the full dynamic range of injection currents. Figure 10b,c shows the estimated pulse width and corresponding spectra using a four-subband model of the QCL active region; the pulse width ranges between 10 ps and 45 ps depending on the cavity length. The theoretical modeling in that paper showed that much shorter pulses and broader phase-locked frequency combs could be achieved by modulating the pumping with shorter, sharper pulses instead of sinusoidal modulation. This finding is fully consistent with the experimental observation in Ref. [23]. Recently, Faist's group at ETH Zurich also demonstrated an approach capable of producing near-transform-limited sub-picosecond pulses (630 fs) with several watts of peak power at a wavelength of around 8 µm using a diffraction-grating compressor, as shown in Figure 10d [27]. Starting from a frequency-modulated phase-locked state, ultrashort high-peak-power pulses were generated via spectral filtering, gain-modulation-induced spectral broadening, and external pulse compression. The pulse width of the QCL emission was investigated using a novel asynchronous sampling method, coherent beatnote interferometry, and interferometric autocorrelation. Figure 10e shows the free-running and round-trip-modulated optical spectra, respectively. A considerable increase in spectral bandwidth is clearly observed in the latter case. Such temporal modulation brings a strong overall amplitude modulation, accompanied by a decrease of the emitted average power due to increased gain saturation, as shown in Figure 10f. This is another milestone in ultrafast pulse generation from QCLs, following the 4 ps THz pulse generation from mode-locked THz QCLs. The achievements presented above are listed in Table 1 below:
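A quick back-of-the-envelope check connects these numbers: for a transform-limited Gaussian pulse, duration times bandwidth is about 0.441, so the 630 fs result implies a minimum spectral span. The figures below are our own estimates, including an assumed round-trip mode spacing.

```python
import numpy as np

# Transform limit for a Gaussian pulse: dt * dnu = 2*ln(2)/pi = 0.441.
TBP = 2.0 * np.log(2.0) / np.pi

dt = 630e-15                       # pulse FWHM from Ref. [27]
dnu = TBP / dt                     # required spectral FWHM (Hz)
lam, c = 8e-6, 3e8                 # ~8 um centre wavelength
dlam = lam ** 2 / c * dnu          # bandwidth in wavelength terms

print(f"required bandwidth: {dnu / 1e9:.0f} GHz (~{dlam * 1e9:.0f} nm at 8 um)")
# With an assumed ~13 GHz round-trip spacing, this corresponds to roughly:
print(f"~{dnu / 13e9:.0f} phase-locked modes at 13 GHz spacing")
```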
Conclusions and Perspectives
To conclude, pulse generation through mode-locking of QCLs has undergone considerable development in the past decade. Owing to their fast gain dynamics, QCLs were long thought to be very difficult to mode-lock. Through active mode-locking and pulse compression, ultrashort pulse trains as short as 4 ps in the THz and 0.6 ps in the mid-infrared regime have been realized from mode-locked QCLs. These results push QCLs to a new milestone, enabling a range of applications in fundamental research, high-tech industry and defense technology, particularly in mid-infrared and THz nonlinear optics, where high pulse energies are typically required. With further development of this technology, many new QCL-based applications will emerge in the near future, potentially replacing or complementing OPA technologies.
Author Contributions: F.W. organized the manuscript preparation of the review paper. Z.C. and F.W. prepared the initial draft. X.Q. prepared the "ultrafast dynamics of QCL" section. M.R. and S.D. revised and improved the whole paper. All authors have read and agreed to the published version of the manuscript.
Funding:
The authors acknowledge startup funding from the Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, the International Quantum Academy, the funding from European Union under the Horizon 2020 research and innovation programs FET-Open grant EXTREME-IR 964735, the French National Research Agency (ANR-18-CE24-0013-02-"TERASEL"), Australian Research Council Discovery Project (DP160103910 and DP200101948), National Science Foundation-"Room temperature high-power terahertz semiconductor laser with high-quality beam shape and stable spectral emission" Grant # 2149908. X.Q. acknowledges support under the Advance Queensland Industry Research Fellowships program.
Conflicts of Interest:
The authors declare no conflict of interest. | 10,739.4 | 2022-11-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
ARL11 correlates with the immunosuppression and poor prognosis in breast cancer: A comprehensive bioinformatics analysis of ARL family members
ADP-ribosylation factor-like protein (ARL) family members (ARLs) may regulate the malignant phenotypes of cancer cells. However, relevant studies on ARLs in breast cancer (BC) are limited. In this research, the expression profiles, genetic variations, and prognostic values of ARLs in BC have been systematically analyzed for the first time using various databases. We find that ARLs are significantly dysregulated in BC according to the TCGA database, which may result from DNA methylation and copy number alteration. Prognostic analysis suggests that ARL11 is the most significant prognostic indicator for BC, and higher ARL11 predicts worse clinical outcomes for BC patients. Further functional enrichment analysis demonstrates that ARL11 enhances the immunosuppression in BC, and dysregulation of ARL11 is significantly associated with immune infiltration in various types of cancer. Our results demonstrate the potential of ARL11 as an immune therapeutic target for BC.
Introduction
Breast cancer is the leading cause of cancer deaths in women [1]. Hence, there is an urgent need to understand the molecular mechanisms that underlie the tumorigenesis and progression of BC, in order to pave the way for the development of novel biomarkers with predictive and therapeutic potential for BC. The ADP-ribosylation factor (ARF) family members of small GTP-binding (G) proteins belong to the Ras superfamily and include the ARFs, SAR1, Tripartite Motif-containing protein 23 (TRIM23), and ARF-like (ARL) proteins. ARLs, which structurally resemble other ARF family members, comprise more than 20 members in humans and are integral to the control of membrane traffic, vesicular transport, cytoskeleton organization, and cell migration via a cyclic switch between a GTP-bound (active) and a GDP-bound (inactive) state [2]. Moreover, dysregulation of ARLs has enormous effects on various diseases, including cancer [3]. Mounting evidence suggests that aberrant expression of ARLs contributes greatly to the tumorigenesis of various types of tumors, including hepatocellular carcinoma, osteosarcoma, and colorectal cancer [4][5][6][7][8][9][10]. For example, decreased expression of ARL2 markedly suppresses cervical cancer cell proliferation, migration, and invasion [8]. In addition, ARL8B is pivotal for the 3D invasive growth of prostate cancer cells in vitro and in vivo [11]. Downregulation of ARL4C suppresses AKT signaling and decreases cell proliferation in lung adenocarcinoma cells [12], and ARL13B is conducive to the progression of gastric cancer in vitro and in vivo by modulating smoothened trafficking and activating Hedgehog signaling [13]. Finally, ARL11 is also implicated in a number of familial cancer types [14][15][16]. All of this indicates the promise of ARLs as therapeutic targets for tumors. However, our existing knowledge of the prognostic value and biomedical functions of ARLs in BC is limited.
In this paper we investigated the expression profiles, genetic alterations, and prognostic values of ARLs in BC, and we identified ARL11 as a vital prognostic indicator for invasive BC. Higher expression of ARL11 was correlated with adverse clinical outcomes of BC patients. Furthermore, enrichment analysis indicated that ARL11 might also significantly boost immunosuppression for BC. Our results highlight the specific role of ARL11 in BC, which may assist in the future development of novel precision therapeutics and biomarkers for BC.
UALCAN database analysis
UALCAN is a comprehensive, user-friendly, and interactive web-portal that executes the analysis of The Cancer Genome Atlas (TCGA) data (http://ualcan.path.uab.edu/index.html) [17]. We used this database to evaluate mRNA expression levels of ARLs between normal samples and breast cancer and to compare the expression differences among different cancer subtypes. Student's t-test p-values less than 0.05 were considered to indicate statistically significant test results.
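For readers who want to reproduce this kind of portal analysis locally, a minimal sketch of the underlying statistic could look as follows; the data are simulated, not actual TCGA values, and the gene list and effect sizes are made up.

```python
import numpy as np
from scipy import stats

# Sketch of a tumor-vs-normal expression comparison per gene, flagging
# p < 0.05 as in the text; all data below are simulated, not TCGA.
rng = np.random.default_rng(42)
genes = ["ARL3", "ARL4C", "ARL4D", "ARL11"]           # illustrative subset
normal = {g: rng.normal(5.0, 1.0, 100) for g in genes}            # log2 units
tumor = {g: rng.normal(5.0 + 0.8 * i, 1.2, 1000)                  # shifted means
         for i, g in enumerate(genes)}

for g in genes:
    t, p = stats.ttest_ind(tumor[g], normal[g], equal_var=True)   # Student's t
    print(f"{g:6s} t = {t:6.2f}  p = {p:.2e}  {'significant' if p < 0.05 else 'n.s.'}")
```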
cBioPortal database analysis
The cBio Cancer Genomics Portal (cBioPortal) (http://www.cbioportal.org/) provides visualization tools for more than 5,000 tumor samples from 232 cancer studies in the TCGA database [18][19][20]. In this paper we used this tool to analyze the Breast Invasive Carcinoma (TCGA, Firehose Legacy, n = 1,108) cohort in particular. The search parameters included mutations, mRNA expression Z-scores, and putative copy-number alterations from GISTIC. Co-expression analysis was also executed according to the online instructions of the cBioPortal.
MEXPRESS database analysis

We used the MEXPRESS database, a web-based tool for visualizing TCGA expression, DNA methylation, and clinical data, to evaluate the impact of methylation status on the expression levels of ARL3, ARL4C, ARL4D, and ARL11 in BC.
The Cancer Regulome tools and data analysis
We also employed the Cancer Regulome tools and data (http://explorer.cancerregulome.org/) from the TCGA database in order to create circos plots to show the genomic location of ARL11 and its related-genes in BC. Spearman correlation was utilized to reveal the pairwise correlation between genes. The circos plots only display genes with p-value <0.01.
Functional enrichment analysis
For our purposes, we took a Spearman's correlation coefficient that exceeded 0.15 to indicate a correlation between ARL11 and its co-expressed genes and then used the clusterProfiler to perform GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) enrichment analysis of co-expressed genes that were correlated with ARL11. We then employed Metascape online software (http://metascape.org) to frame the interaction network of enrichment terms. All analysis was implemented with default software parameters [23].
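clusterProfiler and Metascape are R and web tools; the core statistic behind such over-representation analysis is a hypergeometric test, sketched below in Python with made-up background counts, keeping only the 71/1130 gene ratio quoted later in the text.

```python
from scipy import stats

# Over-representation analysis in one line: P(overlap >= k) when n genes are
# drawn from N background genes of which K carry the annotation. N and K are
# assumed; n and k echo the T cell differentiation gene ratio (71/1130).
N = 20000    # background genes (assumed)
K = 150      # genes annotated to the term (assumed)
n = 1130     # ARL11-correlated genes (Spearman rho > 0.15)
k = 71       # annotated genes within the correlated set

p = stats.hypergeom.sf(k - 1, N, K, n)   # survival function gives P(X >= k)
print(f"gene ratio = {k}/{n}, enrichment p = {p:.2e}")
```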
TIMER 2.0 database analysis
The Tumor Immune Estimation Resource 2.0 (TIMER 2.0) (http://timer.comp-genomics.org/ ) is a computational tool for the methodical analysis of the correlation between the target gene expression and the tumor-infiltrating immune cells of 32 cancer types [24,25]. In this research, we used TIMER 2.0 to analyze the relationship between ARL11 expression and immune infiltration in pan-cancer.
Furthermore, we evaluated the expression status of 18 ARLs in different subtypes of BC including luminal, HER2-positive, and triple-negative BC (TNBC), and the results showed that expression status differed significantly across different BC subtypes. As shown in S2 Fig, ARL2, ARL10, ARL13B, and ARL17B were markedly downregulated in HER2-positive BC compared to the luminal and triple-negative subtypes. Additionally, ARL9 and ARL16 were upregulated in TNBC, and ARL4D and ARL15 were significantly downregulated in TNBC compared to the luminal and HER2-positive subtypes.
We also evaluated the expression status of ARLs in the TIMER 2.0 database and found 18 ARLs significantly dysregulated in BC. Moreover, the expression status of most ARLs was consistent between the two independent datasets. The detailed information for p-values is shown in S1 and S2 Tables.
Genetic alterations of ARLs in BC
In order to acquire a comprehensive understanding of ARLs in BC, we analyzed the genetic alterations of ARLs using the cBioPortal. We found diverse degrees of genetic variation in the 22 ARLs, ranging from 2.6% to 15%, among which ARL8A had the highest mutation ratio (15%) (Fig 2A and S3 Fig). The mutation ratios of ARL16 and ARL17B were also comparatively high, up to 10% and 11%, respectively (Fig 2A). In addition, the alteration frequencies of all ARLs, as well as of ARLs with higher mutation rates (mutation ratio ≥ 10%), in different subtypes of BC are shown in Fig 2B. We also found copy number amplification (CNA) for most ARLs in several different subtypes of BC, but not in metaplastic breast cancer (MBC). Furthermore, we employed Spearman's correlation analysis to explore the relationship between promoter DNA methylation status and ARLs mRNA expression levels. Interestingly, as shown in S3 Table, there was a negative correlation between mRNA expression and DNA methylation for most ARLs, and the correlation coefficients for ARL3, ARL4C, ARL4D, and ARL11 were the highest (|R| ≥ 0.5, p < 0.05) (Fig 2C). Therefore, we also performed specific methylation-site analysis of ARL3, ARL4C, ARL4D, and ARL11 with the MEXPRESS dataset (Table 1).
Intriguingly, our results indicated that the methylation status at different sites might regulate the expression level of ARL3. For instance, ARL3 expression was negatively correlated with DNA methylation status (correlation coefficients ranged from -0.071 to -0.484, p < 0.001). Nevertheless, methylation of cg06361405 at the 3'UTR was positively associated with ARL3 expression (R = +0.337, p < 0.01). Additionally, we found a significant inverse association between ARL4C, ARL4D, and ARL11 expression and DNA methylation at all genomic regions (correlation coefficients ranged from -0.123 to -0.494, p < 0.001) (Table 1 and Fig 3). Moreover, cg01425731 at the 5'UTR of the first exon was the most significant site for ARL11, and hypomethylation at this site prompted the upregulation of ARL11 in BC (R = -0.669, p < 0.001). Collectively, our results indicate that CNA and DNA methylation may feature prominently in the genetic regulation of most ARLs.
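The probe-level statistic here is a plain Spearman correlation between methylation beta values and expression. A minimal sketch on simulated data is given below; the coupling strength and sample size are assumptions, not MEXPRESS output.

```python
import numpy as np
from scipy import stats

# Per-probe methylation-vs-expression correlation, as done for cg01425731.
rng = np.random.default_rng(7)
n = 800                                         # samples (assumed)
beta = rng.beta(2, 2, n)                        # methylation beta values in [0, 1]
expr = 8.0 - 4.0 * beta + rng.normal(0, 1, n)   # expression inversely coupled

rho, p = stats.spearmanr(beta, expr)
print(f"Spearman R = {rho:+.3f}, p = {p:.2e}")
# A strongly negative R mirrors the reported hypomethylation -> upregulation
# pattern (R = -0.669 for cg01425731 in the text).
```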
The prognostic value of different ARLs in BC
We further explored the prognostic value of different ARLs in BC patients based on the E-MTAB-365 cohort, using the Kaplan-Meier (K-M) plotter to examine the effects of ARLs on the recurrence-free survival (RFS) of BC patients. As displayed in Fig 4A and 4B, this analysis revealed a remarkable association between higher expression levels of ARL2, ARL5A, ARL9, and ARL11 and worse RFS, whereas higher expression levels of ARL15 and ARL17A were notably tied to better RFS. These results suggest that these six ARLs may serve as potential prognostic biomarkers for BC, with ARL11 emerging as the most significant prognostic indicator.
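The K-M plotter workflow (median split on expression, log-rank comparison of RFS) can be sketched with the lifelines package; the cohort below is simulated, and the hazard ratio and censoring fraction are assumptions, not E-MTAB-365 estimates.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Median-split survival comparison on simulated data (not E-MTAB-365).
rng = np.random.default_rng(3)
n = 400
expr = rng.normal(0.0, 1.0, n)                     # ARL11-like expression score
high = expr > np.median(expr)
time = rng.exponential(60.0 / np.where(high, 1.8, 1.0))  # months to recurrence
event = rng.uniform(size=n) < 0.7                  # ~30% censoring (assumed)

km_hi, km_lo = KaplanMeierFitter(), KaplanMeierFitter()
km_hi.fit(time[high], event[high], label="ARL11 high")
km_lo.fit(time[~high], event[~high], label="ARL11 low")

res = logrank_test(time[high], time[~high], event[high], event[~high])
print(f"median RFS: high = {km_hi.median_survival_time_:.1f}, "
      f"low = {km_lo.median_survival_time_:.1f} months; "
      f"log-rank p = {res.p_value:.2e}")
```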
Correlation between mRNA level of ARL11 and immunosuppression in BC
To unravel the mechanisms underlying the prognostic value of ARL11 in BC, we analyzed the possible molecular functions of ARL11 based on the TCGA database. The circos plot in Fig 5A shows the genomic location of ARL11 and all ARL11-related genes in BC. As displayed in Fig 5B and 5C, GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) enrichment analysis indicated that ARL11-associated genes were significantly linked to immunosuppressive processes such as T cell differentiation (gene ratio = 71/1130, p = 5.22E-30), PD-L1 expression and the PD-1 checkpoint pathway in cancer (gene ratio = 21/659, p = 7.22E-06), and the Toll-like receptor signaling pathway (gene ratio = 29/659, p = 2.28E-09). To identify the internal associations of these processes, we rendered the top 20 clusters of GO and KEGG terms as a network plot using Metascape online tools, in which we deemed a Kappa similarity > 0.3 to indicate a connection (Fig 5D). In addition, we investigated the relationship between ARL11 expression and immune infiltration in pan-cancer and found that dysregulation of ARL11 was significantly associated with immune infiltration in various cancer types (S4 Table). Notably, the correlation between ARL11 expression and immune infiltration varied across different subtypes of BC. In brief, functional enrichment analysis and correlation analysis indicated that the prognostic value of ARL11 may result from its role in enhancing tumor immunosuppression.
Discussion
The above findings have demonstrated that ARLs are involved in cancer progression. Nevertheless, the clinical relevance of ARLs in BC and their potential mechanisms are not fully understood. To this end, we examine the role of ARLs in BC using bioinformatics analysis. Interestingly, co-expression analysis showed high correlations among ARLs in BC. A number of studies have indicated the existence of collaboration between small GTP protein family members in regulating tumor progression [26][27][28], and our bioinformatics analysis further demonstrates that ARLs may indeed function collaboratively in BC.
In accordance with the expression status of ER, PR and HER2, BC can be categorized into four subtypes: luminal A (ER + or PR + /HER2 -), luminal B (ER + or PR + /HER2 + ), HER2 positive (HER2 + ), and triple-negative BC (TNBC, ER -/PR -/HER2 -) [29]. Using the UALCAN online tool, we evaluated the expression levels of ARLs in BC and normal tissues, and the results indicate that the expression levels of several ARLs vary significantly across different subtypes of BC. Therefore, our results provide evidence that the expression profiles of ARLs are BC subtypes specific.
Our genetic analysis showed that genetic alterations, including CNA and DNA methylation status, are engaged in the dysregulation of ARLs in BC. We found that DNA methylation status inversely relates to the mRNA expression of several ARLs in BC, such as ARL3, ARL4C, ARL4D, and ARL11 (|R| ≥ 0.5, p < 0.05). Multiple sites in the ARL3, ARL4C, ARL4D, and ARL11 genes were found to be hypomethylated in BC patients. DNA methylation is widely recognized as a major epigenetic regulation involved in different stages of tumorigenesis and cancer development. Hypomethylation at the cg24441922 site in particular has been found to contribute to the dysregulation of ARL4C in lung cancer [30], and we found the same result for BC. In short, we found that DNA methylation status correlates with the dysregulation and oncogenic functions of most ARLs in BC.
Considering the oncogenic impact of ARLs, we further performed prognostic analysis of the 22 ARLs in BC and found that 6 ARLs might serve as potential prognostic biomarkers for BC, with ARL11 being the most significant prognostic indicator. ARL11 was initially identified as a low-penetrance cancer gene at chromosome 13q14.3 that is recurrently deleted in some hematopoietic and solid tumors [31,32]. Researchers have demonstrated that ARL11 variants may contribute to the familial risk of various cancer types, such as chronic lymphocytic leukemia, melanoma, breast cancer, prostate cancer, colorectal cancer, and ovarian cancer [16,[33][34][35][36]. More importantly, a recent study has found that ARL11 is highly expressed in several kinds of immune cells and may serve as a positive regulator of extracellular signal-regulated kinase (ERK) signaling in macrophages [37]. However, the specific mechanisms of ARL11 in BC remain poorly understood. Our enrichment analysis demonstrated that ARL11 expression is involved in several immunosuppressive processes in BC, including T cell differentiation, PD-L1 expression, and the PD-1 checkpoint pathway in cancer. As tumor immunosuppression is a critical step in the preinvasive-to-invasive transition and relates to poor prognoses in BC [38], we speculate that the prognostic role of ARL11 in BC may be due to its function in boosting tumor immunosuppression.
Conclusion
This is the first study to date to characterize the expression patterns, genetic alterations, and prognostic value of ARLs in BC. More importantly, this study finds ARL11 to be a vital prognostic indicator for BC; higher ARL11 expression predicts a worse prognosis. Further functional analysis showed that ARL11's prognostic role may result from its promotion of tumor immunosuppression. This study provides new insight into the exact role of ARL11 in BC and emphasizes its potential role as an innovative predictive biomarker and therapeutic target for BC patients. | 3,243.8 | 2022-11-11T00:00:00.000 | [
"Biology"
] |
Coevolution of the reckless prey and the patient predator
The war of attrition in game theory is a model of a stand-off situation between two opponents where the winner is determined by its persistence. We model a stand-off between a predator and a prey when the prey is hiding and the predator is waiting for the prey to come out from its refuge, or when the two are locked in a situation of mutual threat of injury or even death. The stand-off is resolved when the predator gives up or when the prey tries to escape. Instead of using the asymmetric war of attrition, we embed the stand-off as an integral part of the predator-prey model of Rosenzweig and MacArthur derived from first principles. We apply this model to study the coevolution of the giving-up rates of the prey and the predator, using the adaptive dynamics approach. We find that the long term evolutionary process leads to three qualitatively different scenarios: the predator gives up immediately, while the prey never gives up; the predator never gives up, while the prey adopts any giving-up rate greater than or equal to a given positive threshold value; the predator goes extinct. We observe that some results are the same as for the asymmetric war of attrition, but others are quite different.
Introduction
The war of attrition in game theory is a model of a stand-off situation between two opponents where the winner is determined by its persistence. In the symmetric version of the game, where the costs and benefits for two equally matched opponents are the same, the evolutionarily stable strategy (ESS) is stochastic and given by a negative exponential probability distribution for the length of time till giving-up if the cost of waiting is a linear function of time ( [35], [3], [33]). The exponential distribution is equivalent to both players adopting the same constant giving-up rate and the average pay-off for each player turns out to be zero.
In the asymmetric version of the game, where the opponents assume different unambiguous roles like "owner" and "intruder" or "prey" and "predator", there is no ESS under complete information, but a Nash equilibrium where one player gives up immediately while the other player can choose any giving-up time above a certain threshold value ( [42]). This neutrality of strategy choice for the second player can be resolved if the game is even slightly perturbed, e.g., by introducing the possibility of players making errors in the role identification ( [17], [23]). In some cases, however, errors are unlikely, such as with a stand-off between a predator and its prey.
In this paper, we study a stand-off between a predator and its prey, e.g., when the prey is hiding and the predator is waiting for the prey to come out, or more dramatically, when they are locked in a situation of mutual threat of injury or even death by the predator's teeth and claws or the horns and hooves of the prey. The stand-off is resolved when the predator gives up or when the prey tries to escape. More specifically, the predator may give up the prey, which could then escape and survive the attack, or, the prey may have to eventually give up and, as a consequence, be killed by the predator with a certain probability.
Studies on the ecology of fear include the works by [4] and [5], investigating the adaptive behaviour of foragers in order to minimise the cost of predation by selecting the time spent in a certain habitat and the level of vigilance; the experimental works by [22] and [21], which focus on the cost and benefits of the waiting game in the little egret-goldfish dynamics; finally, we cite the articles by [24] and [25], using the Lotka-Volterra system to model the adaptive behaviour of a predator and its prey when the two maximise their fitness by adapting to the other player's strategy or by habitat selection.
Instead of using the asymmetric war of attrition, we embed the stand-off as an integral part of the predator-prey model of [41] derived from first principles. We use the mechanistic method described by [2] to derive a predator functional response similar to a Holling type II, where the effective attack rate and effective handling time are interpretable functions of the underlying individual dynamics rates. In this way, the event rates describing individual birth and death, prey capture and prey handling, and the formation and break-up of a predator-prey pair in a mutual stand-off all become explicit parameters of the population model. The costs and benefits of victory or defeat during a stand-off remain implicit as part of the population dynamics, but eventually they come down to births and deaths gained or lost.
In this context, we study the coevolution of the giving-up rates of the predator and the prey using the adaptive dynamics approach ( [36], [15], [14], [16]). As a consequence, our emphasis is somewhat different than in evolutionary game theory. In particular, we focus on the evolutionary dynamics and local stability assuming small mutation steps.
As we consider the predator and the prey locked together in the stand-off, our study differs also from the previous works by [13], [28] and [27] on evolution of prey timidity as measured by the rates of individual prey entering and leaving a refuge or aggressive defensive posture.
The paper is organised as follows. In Section 2, we derive the population equations for the ecological dynamics and the corresponding equilibria. In Section 3, we introduce the adaptive dynamics framework in the current context and, in Section 4, we study the coevolution of the predator and prey giving up rates. In particular, first we discuss the possible evolutionary phase planes in Section 4.1 and later, in Section 4.2, we use the canonical equation of adaptive dynamics to understand the direction of evolution.
Ecological dynamics
We model the scenario where a predator and a prey species interact in the following way. A searching predator ($y^S$) finds and attacks a foraging prey ($x^F$) at a rate $p$, and with probability $\nu$ the prey is captured and killed, while the predator enters the handling state ($y^H$), which includes eating, digesting, resting and giving birth. With the complementary probability $1-\nu$ the two individuals enter a stand-off state ($P$), where the prey may hide in a refuge or show a high level of alertness or aggressiveness. At the same time the predator does not give up the prey but waits for a favourable moment to attack. The stand-off is resolved at rate $q$ when the predator gives up, or at rate $s$ when the prey tries to escape. We name $s$ and $q$ the giving-up rates of, respectively, the prey and the predator. With probability $1-\theta$ the prey successfully escapes, and with the complementary probability $\theta$ it is captured and killed after all.
The above narrative can be summarised in terms of the fast individual-level processes just described. Consider $n$ predator types and $m$ prey types. The predators of type $j$ with density $y_j$ differ in their giving-up rates $q_j$, and in the same way the prey of type $i$ with density $x_i$ differ in their giving-up rates $s_i$. We denote by $P_{ij}$ the density of pairs formed by a predator of type $j$ and a prey of type $i$. From the individual processes we derive the differential equations for the fast-time population dynamics, with conservation laws for the total predator densities, $y_j = y^S_j + y^H_j + \sum_i P_{ij}$, and for the total prey densities, $x_i = x^F_i + \sum_j P_{ij}$. Since birth and death are slow processes, on the fast time-scale the total predator densities $y_j$ are constant for each type $j$, i.e. $\dot{y}^S_j + \dot{y}^H_j + \sum_{i'} \dot{P}_{i'j} = 0$. We require the same for the total prey densities $x_i$, i.e. $\dot{x}^F_i + \sum_{j'} \dot{P}_{ij'} = 0$. In order to achieve this, we assume that the predator densities are of a smaller order than the prey densities, i.e. $y_j, y^S_j, y^H_j, P_{ij} \ll x_i, x^F_i$ for all $i, j$, so that in the extreme case $\dot{x}_i = 0$ and $x^F_i = x_i$ (see [2] and A for details on the time-scale separation method).
The functional response $f_{ij}$ of the predator type $j$ for the prey type $i$ is given by the average number of prey of type $i$ caught per predator of type $j$ per unit of time, with $y^S_j$, $x^F_i$ and $P_{ij}$ evaluated at the fast-time equilibrium. Substituting the unique equilibrium of the fast dynamics (see (81), (82) and (83) in A) gives the explicit expression (9), which can be rewritten in Holling form with an effective capture rate $\beta_{ij}$ and an effective handling time $h_{ij}$. If only one prey type $x$ with strategy $s$ and one predator type $y$ with strategy $q$ is present, the functional response becomes
$$f_{s,q}(x) = \frac{\beta_{s,q}\,x}{1+\beta_{s,q}h_{s,q}\,x}, \qquad (13)$$
now written with an explicit argument $x$ of the total prey density, where
$$\beta_{s,q} = p\,\frac{\nu q + s(\theta+\nu-\theta\nu)}{q+s}, \qquad h_{s,q} = h + \frac{1-\nu}{\nu q + s(\theta+\nu-\theta\nu)} \qquad (14)$$
are the effective capture rate and handling time, respectively.
The functional response in (13) is of Holling type II. The average time spent handling after a capture is $h$, and the predator enters the handling state in two cases: with probability $\frac{q}{q+s}\nu$ per encounter, the predator gives up stalking the prey and the same prey is captured after a new encounter; with probability $\frac{s}{q+s}(\theta+\nu-\theta\nu)$, the prey gives up hiding from the predator and is caught either after escaping or after a new encounter with a predator. These probabilities, multiplied by the encounter rate $p$, define the capture rate $\beta_{s,q}$ in (14). On the other hand, an encounter followed by a stand-off rather than an immediate capture happens with probability $1-\nu$, and the time spent in the stand-off by the pair is $\frac{1}{q+s}$; accordingly, multiplying the capture rate $\beta_{s,q}$ by the second term $\frac{1-\nu}{\nu q+s(\theta+\nu-\theta\nu)}$ of the handling time $h_{s,q}$ gives $\frac{p(1-\nu)}{q+s}$.
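As a quick numerical check of the reconstructed expressions for $\beta_{s,q}$ and $h_{s,q}$, the sketch below (our own code, with arbitrary parameter values) evaluates the functional response and verifies the $\nu \to 1$ limit.

```python
import numpy as np

# Effective capture rate beta_{s,q}, effective handling time h_{s,q}, and the
# Holling type II response, following eqs. (13)-(14) as reconstructed above.
def beta_h(s, q, p=1.0, nu=0.5, theta=0.6, h=1.0):
    w = nu * q + s * (theta + nu - theta * nu)   # weighted capture pathway rate
    beta = p * w / (q + s)                        # effective capture rate
    h_eff = h + (1.0 - nu) / w                    # effective handling time
    return beta, h_eff

def f(x, s, q, **kw):
    b, he = beta_h(s, q, **kw)
    return b * x / (1.0 + b * he * x)             # Holling type II response

s, q, x = 0.8, 0.3, 2.0
print("f(x) =", f(x, s, q))
# nu -> 1: the stand-off never occurs and f reduces to p*x / (1 + p*h*x)
b1, h1 = beta_h(s, q, nu=1.0 - 1e-9)
print(np.isclose(b1, 1.0), np.isclose(h1, 1.0))   # True True
```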
Note that in the limit $\nu \to 1$, the prey and predator never enter the stand-off state and the predator attacks are successful at rate $p$. In this case we obtain the classical Holling type II functional response, $f(x) = \frac{px}{1+phx}$. For the total population dynamics on the slow time-scale we consider a multi-type version of the Rosenzweig-MacArthur model with the functional response $f_{ij}$ in (9); the equations for the prey of type $i$ and strategy $s_i$ and the predators of type $j$ and strategy $q_j$ are given in (19) and (20). We use the dimensionless quantities $\tilde{x}_i = \frac{x_i}{K}$, $\tilde{y}_j = \frac{y_j}{K}$, $\tilde{t} = rt$, $\tilde{\gamma} = \frac{K\gamma}{r}$, $\tilde{d} = \frac{d}{r}$, $\tilde{\beta}_{ij} = \frac{K\beta_{ij}}{r}$, $\tilde{h}_{ij} = r h_{ij}$ and drop the tildes. When only a single predator with strategy $q$ and a single prey with strategy $s$ are present, the system reduces to the Rosenzweig-MacArthur model of [41], and the outcomes of the ecological dynamics are well known (see B for details on the bifurcation analysis). The interior equilibrium is unique; it is positive under a parameter condition detailed in B, and the steady state is asymptotically stable if the slope of the prey zero-growth isocline at the equilibrium is negative (Figure 1, Case 3). When the interior equilibrium changes its stability, a Hopf bifurcation occurs and the system converges to a stable limit cycle (Figure 1, Case 2). If the interior equilibrium is not viable, the predator-free equilibrium is the global attractor of the ecological dynamics (Figure 1, Case 1).

Figure 1: Phase plane analysis. Case 1: the interior equilibrium is non-positive and the system converges to the predator-free equilibrium; Case 2: the interior equilibrium is positive but unstable and the system converges to a stable limit cycle, while the predator-free equilibrium is unstable; Case 3: convergence to the positive and stable interior equilibrium, while the predator-free equilibrium is unstable.
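The single-type dimensionless dynamics and the cases of Figure 1 can be explored directly. The following sketch (our own code, with assumed parameter values) integrates the single prey/single predator version of (19)-(20) with the functional response above and reports whether the attractor is a cycle or an equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless single prey/single predator Rosenzweig-MacArthur dynamics
# with the stand-off functional response; all parameter values are assumed.
p, nu, theta, h = 4.0, 0.5, 0.6, 0.5
gamma, d = 1.0, 0.35
s, q = 0.8, 0.3                               # giving-up rates

w = nu * q + s * (theta + nu - theta * nu)
beta = p * w / (q + s)                        # effective capture rate
h_eff = h + (1.0 - nu) / w                    # effective handling time

def rhs(t, z):
    x, y = z
    fx = beta * x / (1.0 + beta * h_eff * x)  # Holling type II response
    return [x * (1.0 - x) - fx * y,           # prey: logistic growth - predation
            gamma * fx * y - d * y]           # predator: conversion - death

sol = solve_ivp(rhs, (0.0, 500.0), [0.5, 0.1], rtol=1e-9, atol=1e-12)
x_tail = sol.y[0, sol.t > 400.0]
print(f"prey range on the attractor: [{x_tail.min():.3f}, {x_tail.max():.3f}]")
# A wide range signals a limit cycle (Figure 1, Case 2); a collapsed range
# signals convergence to the interior equilibrium (Case 3).
```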
Adaptive dynamics
We investigate the coevolution of the rates $s$ and $q$ at which, respectively, the prey and the predator quit the stand-off state $P$. The strategy of both the prey and the predator is a giving-up rate, a continuous trait that can take any non-negative value. We use the mathematical framework of adaptive dynamics to understand how the traits evolve and whether coexistence of multiple strategies is favoured by natural selection. We refer to [37], [15] and [14] for the definitions of invasion fitness and selection gradient. We rewrite the ecological dynamics in equations (19) and (20) in terms of the resident environment $E$, with instantaneous per capita growth rates $G_{\mathrm{prey}}$ and $G_{\mathrm{pred}}$ of, respectively, the prey and the predator. When the environment set by the monomorphic resident types with strategies $s$ and $q$ is at equilibrium, the invasion fitness $g_{\mathrm{prey}}(s_m,s,q)$ of a mutant prey with strategy $s_m$ is given by its long-term average population growth rate, and similarly the invasion fitness $g_{\mathrm{pred}}(q_m,s,q)$ of a mutant predator with strategy $q_m$. The sign of the invasion fitness decides whether or not a mutant strategy can invade the resident environment and, in case of convergence to the interior equilibrium, the growth rates of the predator and the prey fully determine the outcome of the invasion. By definition, the invasion fitness satisfies $g_{\mathrm{prey}}(s,s,q)=0$ and $g_{\mathrm{pred}}(q,s,q)=0$ at the ecological equilibrium. Otherwise, when the resident dynamics converges to a periodic attractor with period $t_{s,q}$, we time-average the population growth rate over the limit cycle,
$$g_{\mathrm{prey}}(s_m,s,q) = \frac{1}{t_{s,q}} \int_0^{t_{s,q}} G_{\mathrm{prey}}(s_m,q,E(t))\,dt,$$
and analogously for the predator. The direction of evolution is given by the sign of the selection gradient, i.e. the derivative of the invasion fitness with respect to the mutant trait, evaluated at the resident strategy. We differentiate the fitness of the prey with respect to the mutant strategy $s_m$, and similarly for the predator, which yields the selection gradients in (35) and (36). Analogously to the invasion fitness, the selection gradient in a cycling resident population is defined by the time-average of the expressions in (35) and (36) over the length of the limit cycle.
A singular strategy is a pair of values $(s^*, q^*)$ of the coevolving strategies $s$ and $q$ at which the prey and predator isoclines (the zero sets of the two selection gradients) intersect. When the singularity $(s^*, q^*)$ is a local maximum of the invasion fitness, that is, when the second derivatives satisfy
$$\frac{\partial^2 g_{\mathrm{prey}}(s_m,s^*,q^*)}{\partial s_m^2}\Big|_{s_m=s^*} < 0 \quad\text{and}\quad \frac{\partial^2 g_{\mathrm{pred}}(q_m,s^*,q^*)}{\partial q_m^2}\Big|_{q_m=q^*} < 0,$$
then the evolutionary singular strategy cannot be invaded by any mutant prey or predator trait (see the definition of ESS by [33], later extended to asymmetric games and coevolutionary ESS, for example in [49]). Conditions for $(s^*, q^*)$ to be an evolutionary attractor are given in D.
The canonical equation of adaptive dynamics (see [7], [6]) describes the rate of change of the traits $s$ for the prey and $q$ for the predator by
$$\dot{s} = k_{\mathrm{prey}}(s,q)\,\frac{\partial g_{\mathrm{prey}}(s_m,s,q)}{\partial s_m}\Big|_{s_m=s}, \qquad \dot{q} = k_{\mathrm{pred}}(s,q)\,\frac{\partial g_{\mathrm{pred}}(q_m,s,q)}{\partial q_m}\Big|_{q_m=q},$$
where $k_{\mathrm{prey}}(s,q)$ and $k_{\mathrm{pred}}(s,q)$ are non-negative scaling coefficients, rather difficult to derive, which take into account the influence of mutation. In particular, $k_{\mathrm{prey}}(s,q) = \frac{1}{2}\mu_{\mathrm{prey}}(s)\sigma^2_{\mathrm{prey}}(s)\,x_E(s,q)$, where $\mu_{\mathrm{prey}}(s)$ is the mutation probability per birth event, $\sigma^2_{\mathrm{prey}}(s)$ is the variance of the mutation step distribution and $x_E(s,q)$ the effective population size (the resident equilibrium population size if we assume the resident population at equilibrium). The same definition holds for $k_{\mathrm{pred}}(s,q) = \frac{1}{2}\mu_{\mathrm{pred}}(q)\sigma^2_{\mathrm{pred}}(q)\,y_E(s,q)$.
We refer to the works by [40] and [38] for the extension of the canonical equation to a periodic environment.
In particular, the drifts given by [40] differ by a factor of $\frac{1}{2}$ from the original canonical equations of [7], as this factor is embedded into the definitions of the effective population sizes: the coefficient for the prey trait equation is now $k_{\mathrm{prey}}(s,q) = \mu_{\mathrm{prey}}(s)\sigma^2_{\mathrm{prey}}(s)\,x_E(s,q)$, and similarly for the predator trait, $k_{\mathrm{pred}}(s,q) = \mu_{\mathrm{pred}}(q)\sigma^2_{\mathrm{pred}}(q)\,y_E(s,q)$.
In the expressions for $k_{\mathrm{prey}}(s,q)$ and $k_{\mathrm{pred}}(s,q)$, the terms $\mu_{\mathrm{prey}}(s)$, $\sigma^2_{\mathrm{prey}}(s)$, $\mu_{\mathrm{pred}}(q)$ and $\sigma^2_{\mathrm{pred}}(q)$ are additional model parameters and do not follow from the population dynamics. Conversely, we define the effective population densities in (42) and (43) as ratios between time averages over one complete population cycle, where $b_{\mathrm{prey}}$ and $b_{\mathrm{pred}}$ are the explicit birth terms in the prey and predator equations, while $d_{\mathrm{prey}}$ and $d_{\mathrm{pred}}$ refer to the corresponding death terms. In the same way, the selection gradients in equations (40) and (41) are averaged over the length of the limit cycle.
4.1 Evolutionary dynamics: invasion fitness and selection gradient
We assume that the resident prey and predator populations are monomorphic most of the time. When an invasion occurs, the mutant population is sufficiently rare compared to the resident species and we can apply time-scale separation between the ecological dynamics on the fast time-scale and the evolutionary dynamics on the slow time-scale. Therefore, we assume that the resident population has attained an ecologically stable attractor when the mutant comes along. After invasion, the population evolves towards an evolutionary attractor through a sequence of trait substitutions. During the process of directional evolution, the population remains monomorphic except for the infinitely short time straight after invasion.
We consider a monomorphic prey population with strategy $s$ and a monomorphic predator population with strategy $q$. The analytical results collected below are displayed in the $(s,q)$-planes of Figure 2 and Figure 3.
The invasion fitness of a mutant prey with strategy $s_m$ in the constant environment defined by the resident populations is given in (44), with $(\hat{x},\hat{y})$ as defined in (21) and $\beta_{s,q}$ and $h_{s,q}$ as defined in (13) and (14). Note that $\beta_{s_m,q}$ is the only factor depending on the mutant trait $s_m$, as the ratio $\frac{\hat{y}}{1+\beta_{s,q}h_{s,q}\hat{x}}$ represents the fraction of resident searching predators to which the mutant prey is subjected.
Similarly, the invasion fitness for a mutant predator with strategy $q_m$ is defined analogously, with $\hat{x}$ as given in (21). The evolutionary change is determined by the selection gradient. The selection gradient for the trait $s$ is given by the derivative of the invasion fitness (44) with respect to $s_m$; in the same way, we define the gradient with respect to $q_m$. The unique predator isocline follows from $dg_{\mathrm{pred}}(s,q)=0$ and is a vertical line (51). By imposing $dg_{\mathrm{prey}}(s,q)=0$, we obtain the prey isoclines (52); note that the prey isocline $q_2$ is a straight line whose slope involves $p$. The intersection of the isoclines (51) and (52) gives the unique singular point
$$(s^*, q^*) = \left(\frac{d}{(\gamma-dh)\theta},\, 0\right)$$
(with respect to both traits $s$ and $q$) when the ecological dynamics attains its interior equilibrium. In particular, $s^*$ is positive if and only if $\gamma > dh$.
Thus, high giving-up rates (and shorter stand-offs) are advantageous for the predator when the prey is abundant and, vice versa, low giving-up rates (and longer stand-offs) are better when the prey is rare. In other words, when the prey density is above the critical value $\frac{s\theta}{p\nu}$, finding a new prey is fast relative to waiting out the stand-off, which, in this scenario, is a waste of time for the predator.
The above conditions on the sign of the predator gradient can be reformulated in terms of the singularity $s^*$. We check the sign of $(p\nu\hat{x} - s\theta)$ using the full expression for $\hat{x}$ with $q = 0$. Solving for $s$, we find that the predator giving-up rate switches from downward to upward evolution, creating a bang-bang situation, once the prey strategy passes the singular point $s^*$. The singularity $(s^*, 0)$ is an unstable saddle; the conditions for convergence stability do not apply here, and we can likewise exclude evolutionary branching. Furthermore, the points $(s, 0)$ with $s > s^*$ are (non-isolated) boundary attractors with respect to $s$, as they lie on the prey isocline $q_1$.
In C we prove the main result on the predator gradient when the resident dynamics is at the interior equilibrium, i.e. the biological explanation for the directional evolution of the predator trait, and we summarise it in a proposition. When the population is cycling, we use standard numerical methods to compute the invasion fitness and the selection gradient. Specifically, we use the procedure NDSolve of the software Mathematica to numerically integrate the population equations (19) and (20) for one prey and one predator. We use the Poincaré map to evaluate convergence of the solutions; in particular, we collect the data until the distance between $x$ and $R(x)$, with $R$ denoting the return map, is less than a small error tolerance. The length of the limit cycle is measured by the time interval between $x$ and $R(x)$. Finally, we numerically integrate the mutant's growth rate over the limit cycle as indicated in equations (33) and (34).
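The same recipe transcribes readily to Python. In the sketch below (our own code, with assumed parameters and a toy mutant growth rate standing in for the paper's expressions (33)-(34)), a Poincaré section detects the limit cycle and the mutant growth rate is averaged over one period.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Python transcription of the recipe above (the paper used Mathematica's
# NDSolve): find the limit cycle via a Poincare section, then average a
# mutant's growth rate over one period. Parameters and G_mutant are assumed.
beta, h, gamma, d = 2.0, 2.0, 1.0, 0.2

def rhs(t, z):                                 # dimensionless resident dynamics
    x, y = z
    fx = beta * x / (1.0 + beta * h * x)
    return [x * (1.0 - x) - fx * y, gamma * fx * y - d * y]

x_eq = d / (beta * (gamma - d * h))            # interior equilibrium (unstable)
y_eq = (1.0 - x_eq) * (1.0 + beta * h * x_eq) / beta

section = lambda t, z: z[1] - y_eq             # Poincare section through y_eq
section.direction = 1.0                        # count upward crossings only

sol = solve_ivp(rhs, (0.0, 3000.0), [0.5, 0.1], events=section,
                rtol=1e-10, atol=1e-12, dense_output=True)
tx = sol.t_events[0]
xs = sol.sol(tx)[0]                            # successive return values R(x)
k = int(np.argmax(np.abs(np.diff(xs)) < 1e-6)) # converged: |x - R(x)| < tol
t0, t1 = tx[k], tx[k + 1]                      # one period t_{s,q}

def G_mutant(x, y):                            # toy mutant per capita growth
    return 1.0 - x - 1.1 * beta * y / (1.0 + beta * h * x)

ts = np.linspace(t0, t1, 2000)
x_t, y_t = sol.sol(ts)
g = G_mutant(x_t, y_t).mean()                  # eq. (33)-style time average
print(f"period = {t1 - t0:.3f}, cycle-averaged mutant growth = {g:+.4f}")
```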
We find 10 different configurations of the evolutionary phase plane with respect to the giving-up rates $s$ and $q$, shown in Figure 2 and Figure 3. In Scenarios 1-6 we vary the parameter $\nu \in [0,1)$. We distinguish the regions where the interior equilibrium of the ecological dynamics is non-positive (the resident population attains the predator-free equilibrium), positive and stable, or positive and unstable (the resident population converges to a stable limit cycle). Note that the extinction boundary never coincides with the Hopf bifurcation line (see the top-right panel, which zooms in around the Hopf and transcritical bifurcations, and B). Finally, we plot the prey isoclines, the predator isoclines and the unique singularity. By continuity, the vertical isocline extends into the region of convergence to the stable limit cycle in Scenarios 2 and 3, and the prey isocline $q_1 = 0$ is still present in the cycling region. Once in the cycling region, the predator isocline is no longer vertical; its slope is determined by the periodic resident population.
We fix ν = 0.5 and obtain Scenarios 7 and 8 by varying the parameter θ ∈ (0, 1]. In particular, when θ = 0 the predator isocline in (51) is at infinity, while it becomes feasible for parameter values greater than 0 (Scenario 8). In Scenario 7, we show that the predator isocline can also fall on the right-hand side of the cycling region for small values of θ.
In the same way, we fix ν = 0.01 and check the dynamics for different values of θ ∈ (0, 1]. We add Scenarios 9 and 10 to the list of possible evolutionary phase planes: the prey isocline has positive slope and the predator isocline appears in the region of non-viability of the interior equilibrium. In Scenario 10 there are no values of (s, q) for which the interior equilibrium is positive and, therefore, the dynamics only converges to the predator-free equilibrium.
Note that if ν = 1, so that the stand-off never occurs, we obtain the degenerate case in which the prey and the predator gradients in (49) and (50) are zero for every value of s and q. On the other hand, when θ = 0, so that the stand-off never ends with prey capture, the prey gradient is zero for every s and q, while the predator gradient is positive everywhere.
Evolutionary dynamics: the canonical equation of adaptive dynamics
We use the canonical equations in (40) and (41) and give the stream plots of the prey drift k_prey(s, q) ∂g_prey(s_m, q)/∂s_m |_{s_m=s} and the predator drift k_pred(s, q) ∂g_pred(s, q_m)/∂q_m |_{q_m=q} to understand the direction of the evolution of the traits s and q. In particular, we fix the mutation probabilities per birth event μ_prey(s) and μ_pred(q), and the mutation variances σ²_prey(s) and σ²_pred(q) (note that for the shape of the orbits only the relative values matter).
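For the reader's convenience, the drifts just described can be written compactly; the display below is our reconstruction of the standard form that (40)-(41) take in adaptive dynamics, with the conventional factorisation of the coefficients k_i stated as an assumption (the exact definitions used here are in (40)-(43)):

$$
\frac{ds}{dt}=k_{\mathrm{prey}}(s,q)\,\left.\frac{\partial g_{\mathrm{prey}}(s_m,q)}{\partial s_m}\right|_{s_m=s},
\qquad
\frac{dq}{dt}=k_{\mathrm{pred}}(s,q)\,\left.\frac{\partial g_{\mathrm{pred}}(s,q_m)}{\partial q_m}\right|_{q_m=q},
$$

where, in the usual Dieckmann-Law convention, $k_i=\tfrac{1}{2}\,\mu_i\,\sigma_i^2\,\hat{n}_i$, with $\hat{n}_i$ the (effective) population size of species $i$.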
In the cycling region of the (s, q)-plane, we plot the drifts of the canonical equations in (40) and (41) with the effective population sizes in (42) and (43). This definition forces us to split the right-hand side of the population equations into explicit birth and death terms. The split is trivial except for the prey growth term x(1 − x). One possibility is x(1 − x) = 2x − x(1 + x), with 2x being the birth term and x(1 + x) modelling (density-independent) natural mortality and (density-dependent) death due to intraspecific competition between prey individuals. The per capita birth and death rates that we choose for the numerical analysis are therefore b_prey = 2 and d_prey = 1 + x. (57) The resulting dynamics is described in Figure 2 and Figure 3, which differ in the speed of evolution of the prey trait, σ²_prey(s).
Note that in Scenario 10 the dynamics converges only to the predator-free equilibrium and there is no directional evolution. Furthermore, we do not show the case ν = 1, where both selection gradients are null and so are the drifts. The case θ = 0 is also not displayed: here the prey drift is zero for every s and q (as the prey gradient is zero) and all trajectories are vertical, converging to high levels of q if the prey isocline q_2 has negative slope, and some of them converging to the extinction boundary otherwise.
In Scenarios 1, 7 and 8 in Figure 2 the vertical isocline falls on the right-hand side of the cycling region. The pair of traits (s, q) will either approach the boundary attractors with s > s* and q = 0, or eventually meet the vertical axis s = 0 and evolve towards values of q ≫ 0. In particular, the prey drift is negative everywhere above the prey isocline q_1, while the predator drift changes sign when the orbits cross the predator isocline. Note that on the isocline q_2 the prey drift is zero, given that the prey gradient ∂g_prey(s_m, q)/∂s_m |_{s_m=s} vanishes there. At the same time, dq/dt becomes zero in the region of convergence to the predator-free equilibrium, where the predator effective population size is y_E(s, q) = 0.
In Scenarios 2 and 3 in Figure 2, the vertical isocline falls into the region of convergence to the stable limit cycle. Some trajectories of the evolutionary dynamics enter the cycling region from the right-hand side of the predator isocline. However, when the population is cycling, the predator drift does not change its behaviour: it is negative on the right-hand side of the predator isocline and positive otherwise.
So far we have observed two outcomes, which occur depending on the model parameters and the initial conditions. In terms of quitting times, convergence to the boundary attractors with s > s* and q = 0 reads as the predator never giving up, with the prey giving up after an exponentially distributed amount of time. Otherwise, convergence to the vertical axis and high values of q corresponds to the case in which the predator gives up immediately and the prey never gives up.
In Scenarios 4, 5 and 6 in Figure 2 we observe a new behaviour in addition to the ones described above: some trajectories of the evolutionary dynamics eventually end up on the extinction boundary, which is attracting here. As the predator density declines, selection weakens, by continuity of the fitness gradient as a function of the environment and of the traits s and q, and, in the absence of the predator, it becomes neutral (i.e. there is no longer directional selection). The orbits which end on the extinction boundary become almost horizontal, as dq/dt becomes very small. The points on the prey isocline are also (non-isolated) attractors for the prey trait s and, as a consequence, the pair of traits stops evolving when it encounters the isocline.
Finally, in Scenario 9 in Figure 2, both the prey and the predator drifts are negative and, as a consequence, directional evolution always leads towards the points with s > s* and q = 0.
As directional selection is determined by the relative speeds of evolution, the extinction boundary is repelling only if its slope is negative enough compared with that of the stream lines. When we increase the speed of evolution of the prey trait s, convergence to the extinction boundary becomes a more likely outcome, as shown in Figure 3. In other words, a sort of evolutionary murder occurs, since evolution in the s-direction determines the extinction of the predator species.

Figure 2: Evolutionary phase planes for ν ∈ [0, 1] and θ ∈ (0, 1]. Dark grey region: non-viability of the resident interior equilibrium. Light grey region: viability and asymptotic stability of the resident interior equilibrium. White region: unstable resident interior equilibrium and stable limit cycle. Black lines: prey isoclines. Dotted line: predator isocline. Thick point: the unique singularity. Blue arrows: orbits of the system of canonical equations (μ_prey(s) = 1, σ²_prey(s) = 1, μ_pred(q) = 1, σ²_pred(q) = 1).

Figure 3: As Figure 2, but with a faster prey evolution, σ²_prey(s) = 10 (μ_prey(s) = 1, μ_pred(q) = 1, σ²_pred(q) = 1).
Conclusions
Using the mathematical framework of adaptive dynamics, we studied the coevolution of the giving-up rates q and s of the predator and the prey, respectively, in a stand-off situation following a failed attack. We found three qualitatively different long-term evolutionary outcomes depending on the model parameters as well as on the initial conditions of the strategy dynamics: (i) the predator gives up immediately (i.e., q = ∞), while the prey never gives up (i.e., s = 0); (ii) the predator never gives up (i.e., q = 0), while the prey adopts any giving-up rate greater than or equal to a given positive threshold value; (iii) the predator goes extinct.
Concerning the transient phase of the strategy dynamics, we found: (iv) the giving-up rate s of the prey always decreases unless the predator has gone extinct or has adopted a giving-up rate q = 0, in which cases s has become selectively neutral; (v) the giving-up rate q of the predator decreases for high values of the giving-up rate s of the prey, but increases once s has become less than a given threshold value.
Graphical examples of the various scenarios are given in Figure 2 and Figure 3. Note that results (i) and (ii) are theoretical limits; in practice, the prey refuses to give up before the predator does in (i), and similarly for the predator in (ii). Furthermore, these are fast time-scale processes and the population will not go extinct, as birth and death happen on the slow time-scale, on which even an arbitrarily long stand-off is negligibly short.
The threshold values in (ii) and (v) are the same. For a constant (i.e., non-cycling) population, we have shown (see Proposition 4.1) that if s is equal to this threshold, denoted by s*, then the expected time till the next successful prey capture is exactly the same for a searching predator and for a predator in a stand-off. For s > s* the expected time till the next successful prey capture is longer for a searching predator than for a predator in a stand-off, whereas for s < s* the situation is reversed. Therefore, if s > s*, there is a selective advantage for the individual predator to adopt an even lower giving-up rate, whereas if s < s*, the advantage is for the individual predator with a higher giving-up rate. In other words, evolution minimises the expected time till the next prey capture, which is equivalent to maximising the rate of prey capture, which in turn is equivalent to maximising the predator's per capita birth rate.
The above explains results (i), (ii) and (v), at least for a constant population. The same argument holds for a cycling population as well, except that the threshold value of s is no longer a constant but depends on the value of q (see Figure 2, Scenarios 1, 2 and 3) and, moreover, the expected time till the next successful prey capture also involves averaging over the population cycle. Figure 2 also shows that (i) is the more likely outcome if the predator already starts with a high giving-up rate or evolves more slowly than the prey, while we get (ii) more often if the prey starts with a high giving-up rate and the predator evolves faster than the prey.
The costs and benefits for the prey have a completely different origin than for the predator. To understand result (iv), we observe from the invasion fitness of the prey in equation (44) that the giving-up rate s_m of the mutant prey affects the invasion fitness only through the effective capture rate β_{s_m,q} in the numerator of the predator's functional response, while the denominator depends only on the resident strategies. Since β_{s_m,q} is an increasing function of s_m, it is always beneficial for the individual prey to have a lower giving-up rate. In other words, evolution minimises the predation-related per capita death rate of the prey, irrespective of the resident strategies.
The predator goes extinct if the prey capture rate for a searching predator is higher than for a predator in a stand-off. However, as we observe in scenarios 3, 4, 7 and 8 of Figure 2, the predator cannot take advantage of this, because the giving-up rates of both the predator and the prey are low, so that the two stay too long in the stand-off situation. Alternatively, the predator goes extinct if the prey capture rate in the stand-off is higher than when searching, but the predator again cannot take advantage of this. In this case, the giving-up rate of the predator is high, so that the stand-off is left too early (Figure 2, scenarios 3, 4, 7 and 8). Result (iii) happens when an evolutionary trajectory crosses the extinction boundary in the (s, q)-plane. From the direction of the evolutionary trajectories it can be seen in Figure 2 that it is always the evolutionary change in the s-direction, not in the q-direction, that drives the population over the extinction boundary. It is therefore correct to say that the prey drives the predator to extinction. It also follows that extinction becomes more likely if the prey evolves faster than the predator, so that the horizontal component of the coevolutionary velocity vector becomes dominant (Figure 3).
In the context of evolutionary game theory, the asymmetric war of attrition has a continuum of Nash equilibria in which one player quits immediately while the other is prepared to wait any time at a cost that is not less than the value of the contested object ([32], [34], [42], [23]). As a consequence of our model being derived from individual-level interactions with exponentially distributed event times, quitting after a fixed positive time cannot be expressed in terms of a constant giving-up rate. In that sense, the Nash equilibria of the war of attrition in game theory cannot be reproduced here. At most we can express evolutionary outcomes in terms of average giving-up times, which are the reciprocals of the corresponding giving-up rates. With that in mind, only result (i) is similar to the Nash equilibria of the asymmetric war of attrition.
All other results are enigmatic for the present model. Result (ii) is the opposite of the Nash equilibrium: one player never gives up (as opposed to immediately giving up), while the other player can adopt any giving-up rate greater than a given threshold rate, which is the same as having an average giving-up time that is less than (as opposed to greater than) a given positive threshold time. Results (iii), (iv) and (v) are about (or a direct consequence of) strategy dynamics by selection and small mutation steps and dependence on initial conditions. This is the realm of the adaptive dynamics approach, and such results cannot be reproduced by a game-theoretical analysis because of its focus on evolutionary stability and Nash equilibria.
While our results differ from the asymmetric war of attrition game partly because of the chosen methods of analysis, more important, however, is the difference in modelling approaches. The war of attrition is formulated without ecological context, and the costs and benefits for the players are predetermined given their respective strategies.
Embedding the game in a population dynamical model such as the replicator equation of [19] does not add any new ecology. In contrast, our model is derived from a network of individual-level interactions and processes, including the formation and break-up of a predator-prey pair engaged in the mutual stand-off. As a consequence, the evolutionary game is an integral part of the full ecology and predator-prey dynamics. The costs and benefits of different strategies are implicit and emerge from the dynamics rather than being predetermined. Only after the model was formulated and analysed did it emerge that the costs and benefits for the predator can be measured in terms of the per capita birth rate, while for the prey they are measured in terms of the per capita death rate due to predation. This may seem obvious, but only in retrospect. And, as seen above, the results are not the same.
Note that the time-scale separation is another crucial step in our modelling approach. By singular perturbation theory, we obtain qualitatively similar results if we relax the time-scale separation assumptions slightly. Relaxing them further, the system becomes higher-dimensional and has potentially more complex dynamics, including chaotic behaviour and multiple attractors. In that scenario, we do not know to what extent our conclusions still apply.
An interesting and quite natural extension of the model would include the possibility that the stand-off escalates into a fight where both the predator and the prey may get injured. The costs and benefits of different giving-up times then would not only come from changes in the per capita birth rates (in case of the predator) and predation-related death rates (in case of the prey) as in the present model, but also from the effects of injury on the fecundity and mortality rates of both the predator and the prey. Although it is not obvious how the interaction between the stand-offs and the escalated fights would affect the evolutionary outcomes, using our modelling approach (from individual-level processes to population dynamics) makes the evolutionary problem easy to formulate and straightforward to analyse with the adaptive dynamics framework.
A On the time-scale separation for the ecological dynamics
The complete list of equations for the population dynamics consists of the individual-level balance equations together with the conservation laws in (5) and (6).
We separate the dynamics into two time-scales by introducing a small, dimensionless scaling parameter ε > 0, and give the slow-fast equations using the scaled variables and parameters. We introduce the short time t̃ = ε⁻¹t, let ε → 0 and drop the tildes to obtain the equations for the fast dynamics. We use the conservation law for the total predator density to reduce the equations for y_j^S, y_j^H and P_ij to only two equations, and we set x_i^F = x_i. We obtain the equations for the quasi-equilibrium from (1) and (4), and conclude that the system has a unique quasi-equilibrium of the fast dynamics. When the denominator of (82) and (83) is positive, the quasi-equilibrium is feasible, i.e. it exists and is positive. Furthermore, we use linear stability analysis to check the stability conditions. The elements of the Jacobian matrix of the fast system in (79) and (80) are constant and do not depend on the equilibrium, so computing the trace and the determinant and their signs is straightforward. We conclude that under the feasibility condition the trace is negative and the determinant positive, hence the quasi-equilibrium is hyperbolically stable. To prove global stability we use the Poincaré-Bendixson and Bendixson-Dulac theorems: the existence of a limit cycle can be excluded because the trace is negative everywhere and, since there is only one equilibrium, it must be the ω-limit of every orbit of the fast dynamics.
B On the equilibrium for the ecological dynamics
We consider the differential equations in (19) and (20) for a single prey x and a single predator y with strategies s and q, respectively, and with β_{s,q} and h_{s,q} as defined for (13):

ẋ = x(1 − x) − β_{s,q} x y / (1 + β_{s,q} h_{s,q} x),
ẏ = γ β_{s,q} x y / (1 + β_{s,q} h_{s,q} x) − d y.
When the interior equilibrium is not feasible (i.e., not positive), it is no longer present and the dynamics converges to the predator-free equilibrium.
The determinant and the trace of the community matrix J(x, y)|_{x=x̂, y=ŷ} for the ecological dynamics determine the stability of the equilibrium; in particular,

tr J(x, y)|_{x=x̂, y=ŷ} = 1 − d − 2x̂ + β_{s,q}(γx̂ − ŷ + γ h_{s,q} β_{s,q} x̂²) / (1 + h_{s,q} β_{s,q} x̂)².

A Hopf bifurcation occurs when the trace evaluated at the interior equilibrium is zero. To verify that the Hopf bifurcation does not coincide with the transcritical bifurcation, we look at the (s, q)-planes in Figure 2 and Figure 3: it is enough to check that the two curves cannot intersect the s-axis at the same point. By solving, for different parameters, the equations for the intersection points of the Hopf bifurcation line and of the extinction border with the axis q = 0, we find that the two bifurcations coincide under no condition on the parameter values.
C On the sign of the predator gradient
In this Section we discuss the proof of Proposition 4.1. The condition x̄ = sθ/(pν) in (55) is equivalent to (νpx̄)⁻¹ = (θs)⁻¹ and compares the rates of the two events modelling successful prey capture by the predator (here given in the form of monomolecular reactions with a constant or prey-density-dependent transition rate), with x_F = x at the fast-time equilibrium:

y_S --(νpx)--> y_H (attack and prey capture),
P --(θs)--> y_H (prey ends the stand-off and is captured).
In particular, let us denote by E_S the expected time till prey capture for a predator starting in state y_S, and by E_P the expected time till prey capture for a predator starting in state P. Following these definitions, we obtain a linear system of equations for E_S and E_P; solving it and comparing the two solutions (E_S = E_P) leads to condition (55). As x̄ = sθ/(pν) can be reformulated in terms of s*, we conclude that if s = s*, then E_S = E_P. In the same way, starting from the conditions in (54) and (56), we get that if s > s*, then E_S > E_P, and if s < s*, then E_S < E_P.
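The displayed system for E_S and E_P is garbled in this copy of the text. Under the transition scheme above, and assuming (as stated in the proof) that q = 0, that an attack succeeds immediately with probability ν (otherwise a stand-off forms), and that a prey which gives up is captured with probability θ (and escapes otherwise), a reconstruction consistent with condition (55) reads:

$$
E_S=\frac{1}{p\bar{x}}+(1-\nu)\,E_P,\qquad
E_P=\frac{1}{s}+(1-\theta)\,E_S .
$$

Setting $E_S=E_P=E$ gives $\nu p\bar{x}\,E=1$ and $\theta s\,E=1$, so that $E_S=E_P$ holds exactly when $(\nu p\bar{x})^{-1}=(\theta s)^{-1}$, i.e. condition (55).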
D On the canonical equation and different definitions of stability
We recall the canonical equations in (40) and (41). For the purpose of this study, we only give conditions for the stability of a singularity in a monomorphic resident population, and we assume that the coefficients k_i(s, q), i = prey, pred, are differentiable. We can define an evolutionary attractor by analysing the trace and the determinant of the Jacobian matrix of the evolutionary dynamics in (40) and (41) evaluated at the singularity. The elements of J(s, q)|_{s=s*, q=q*} are

J_11 = k_prey(s, q) [∂²g_prey(s_m, q)/∂s_m² + ∂²g_prey(s_m, q)/∂s ∂s_m] = k_prey(s, q) A_11,
J_12 = k_prey(s, q) ∂²g_prey(s_m, q)/∂s_m ∂q = k_prey(s, q) A_12,
J_21 = k_pred(s, q) ∂²g_pred(s, q_m)/∂q_m ∂s = k_pred(s, q) A_21,
J_22 = k_pred(s, q) [∂²g_pred(s, q_m)/∂q_m² + ∂²g_pred(s, q_m)/∂q ∂q_m] = k_pred(s, q) A_22.

By the Routh-Hurwitz criterion, a singularity (more generally, a singular coalition of one-dimensional traits) is convergence stable if det(J) > 0 and tr(J) < 0. A singularity (s*, q*) is weakly convergence stable if there exists a strictly positive diagonal matrix with diagonal entries k_i(s, q) > 0, i = prey, pred, such that k_prey(s, q)A_11 + k_pred(s, q)A_22 < 0 and A_11 A_22 − A_12 A_21 > 0. For the present model, we have weak stability if and only if at least one of A_11 and A_22 is negative, as we can take either k_prey(s, q) or k_pred(s, q) large enough to make the other term in (107) negligible. Strong convergence stability is obtained when the conditions hold for any strictly positive diagonal matrix with diagonal entries k_i(s, q) > 0, i = prey, pred, i.e. when A_11 < 0, A_22 < 0 and A_11 A_22 − A_12 A_21 > 0. Note that strong convergence stability implies weak convergence stability. Furthermore, we call a strategy (s*, q*) totally stable if it is stable for the associated differential inclusions; total stability implies strong stability of the singular strategy (see also [31]).
PERMUTOOLS: A MATLAB PACKAGE FOR MULTIVARIATE PERMUTATION TESTING
Statistical hypothesis testing and effect size measurement are routine parts of quantitative research. Advancements in computer processing power have greatly improved the capability of statistical inference through the availability of resampling methods. However, many of the statistical practices used today are based on traditional, parametric methods that rely on assumptions about the underlying population. These assumptions may not always be valid, leading to inaccurate results and misleading interpretations. Permutation testing, on the other hand, generates the sampling distribution empirically by permuting the observed data, providing distribution-free hypothesis testing. Furthermore, this approach lends itself to a powerful method for multiple comparison correction — known as max correction — which is less prone to type II errors than conventional correction methods. Parametric methods have also traditionally been utilized for estimating the confidence interval of various test statistics and effect size measures. However, these too can be estimated empirically using permutation or bootstrapping techniques. Whilst resampling methods are generally considered preferable, many popular programming languages and statistical software packages lack efficient implementations. Here, we introduce PERMUTOOLS, a MATLAB package for multivariate permutation testing and effect size measurement.
Background

Hypothesis testing
For over a century, researchers have relied on parametric statistical procedures for conducting hypothesis testing, such as the famous Student's t-test [Student, 1908]. However, parametric testing was developed out of the necessity to make inferences about the null distribution, as it was impractical to generate it empirically. Today, it is possible to do so using permutation tests. Permutation tests work by permuting the observed data in an appropriate manner to compute the empirical distribution of the test statistic of interest (known as the permutation distribution), which approaches the null distribution [Fisher, 1935]. From the permutation distribution, we can estimate the confidence interval (CI) of the test statistic by computing the corresponding percentiles (e.g. the 2.5% and 97.5% percentiles for a 95% CI). We can also estimate the probability of observing such a result by chance (i.e. the p-value) by calculating the proportion of the permutation distribution that is greater than or equal to the magnitude of the test statistic (Fig. 1). The more permutations generated, the more accurate the estimate. Typically, permutations in the order of several thousand are required to obtain reliable p-values, which is computationally trivial for modern computers [Ernst, 2004].
Permutation testing can be applied to any statistical test. As no assumptions are made about the shape of the underlying distribution, permutation tests provide distribution-free, nonparametric hypothesis testing without the need for rank transformations [Holt and Sullivan, 2023]. For independent samples, permutation tests have been shown to be relatively insensitive to differences in population variance when samples of equal size are used [Murphy, 1967, Groppe et al., 2011b]. Moreover, permutation tests have been shown to be more accurate than parametric tests in fields such as biomedical research [Ludbrook and Dudley, 1998], and to yield more reliable p-values, making them more likely to produce replicable results [Noguchi et al., 2021]. Despite the widespread use of significance thresholding, it is recommended to treat p-values as a continuous measure [McShane et al., 2019]; note, however, that a p-value does not indicate the strength of an effect, only its likelihood. PERMUTOOLS offers permutation testing and confidence interval estimation for a range of statistical tests, including the ANOVA (one-way, two-way), t-test (one-sample, paired-sample, two-sample), F-test (two-sample), Z-test (one-sample), and correlation test (Pearson, Spearman, rankit). To discourage the practice of significance thresholding, it does not output dichotomous test results based on an arbitrary significance threshold.
Correcting for multiple comparisons
When conducting multiple hypothesis tests simultaneously, there is an increased risk of false discoveries, or type I errors. However, many of the traditional correction methods used to control the family-wise error rate (FWER) tend to be overly conservative, resulting in increased type II errors (e.g. Bonferroni correction, Holm-Bonferroni method). Researchers are thus often faced with the delicate task of controlling the trade-off between type I and type II errors. In the biomedical field, researchers tend to prioritize controlling type I errors, because the consequences of false positives can be detrimental, e.g. the introduction of an ineffective new therapy or treatment [Ludbrook and Dudley, 1998]. This inherent bias, along with the use of overly conservative correction methods, can be limiting in the pursuit of scientific progress.
Another advantage of permutation tests is that they can utilize a powerful technique for correcting for multiple comparisons, known as max correction, which is less prone to type II errors than conventional correction methods [Blair and Karniski, 1993, Westfall and Young, 1993]. Max correction (also known as tmax correction [Blair et al., 1994] or joint correction [Boca et al., 2014]) works as follows: on each permutation of the data, a separate test statistic is computed for each variable and the maximum absolute value (or most extreme positive or negative value) is taken across all variables. Repeating this procedure thousands of times produces a single permutation distribution against which the actual test statistics are compared (Fig. 1). Thus, taking the maximum across more variables naturally produces a more conservative permutation distribution. This highly intuitive approach provides strong control of FWER, even for small sample sizes [Gondan, 2010, Groppe et al., 2011a, Rousselet, 2023]. Max correction has been used in various scientific disciplines, including the study of electrophysiological data [Blair and Karniski, 1993, Groppe et al., 2011a,b] and human behavioural data [Gondan, 2010, Shaw et al., 2020, Crosse et al., 2022]. PERMUTOOLS automatically applies max correction to multivariate data, unless specified otherwise.
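As an illustration of the procedure just described (a minimal sketch, not PERMUTOOLS' internal implementation), a max-corrected two-sample permutation test over multiple variables might look as follows in MATLAB:

% x, y: observations-by-variables matrices (two independent samples)
nperm = 10000;
[nx, nvar] = size(x); ny = size(y,1);
z = [x; y];                                 % pooled data
tobs = tstat2(x, y);                        % observed t per variable
tmax = zeros(nperm, 1);
for i = 1:nperm
    idx = randperm(nx + ny);                % permute group labels
    xp = z(idx(1:nx), :); yp = z(idx(nx+1:end), :);
    tmax(i) = max(abs(tstat2(xp, yp)));     % max statistic across variables
end
% Corrected p-value per variable (+1 to avoid zero p-values)
p = (sum(tmax >= abs(tobs)) + 1) / (nperm + 1);

function t = tstat2(a, b)                   % two-sample t (equal variances)
    na = size(a,1); nb = size(b,1);
    sp = sqrt(((na-1)*var(a) + (nb-1)*var(b)) / (na+nb-2));
    t = (mean(a) - mean(b)) ./ (sp * sqrt(1/na + 1/nb));
end

Because a single pooled distribution of maxima is used for all variables, the correction automatically becomes more conservative as the number of variables grows, exactly as described above.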
Measuring effect size
Effect size measurement is an equally important part of inferential statistics [Hentschke and Stüttgen, 2011]. Today, most scientific journals require that authors report the size of an effect, and not just its dichotomous existence. A common effect size measure used in research is the standardised mean difference, known as Cohen's d [Cohen, 1969]. Standardised effect sizes have the advantage of being metric-free, meaning that they can be directly compared across different studies. For independent samples, Cohen's d uses the pooled standard deviation when the samples are assumed to have equal variances, and an unpooled estimate when this cannot be assumed. For independent samples with significantly different variances, an estimate based on the control sample's variance can be used, known as Glass' delta [Glass, 1976]. In the case of ordinal data, an alternative formulation known as Cliff's d should be used [Cliff, 1993]. PERMUTOOLS gives the option to compute any of the above standardised effect size measures, as well as several unstandardised measures.
It is also typical to report the confidence interval of an effect size. Whilst effect size CIs have traditionally been calculated using parametric methods, these too can be estimated empirically by generating the sampling distribution using a resampling procedure known as bootstrapping. Bootstrapping is fundamentally different from permutation testing in that it resamples with replacement, whereas permutation testing resamples without replacement. PERMUTOOLS uses an efficient bootstrapping algorithm that is optimised for multivariate data analysis.
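A minimal sketch of a percentile-bootstrap CI for a standardised mean difference follows; x1 and y1 are hypothetical data vectors, and this is an illustration rather than the toolbox's optimised multivariate algorithm (prctile requires the Statistics and Machine Learning Toolbox):

% Percentile bootstrap CI for Cohen's d (two independent samples)
nboot = 10000;
nx = numel(x1); ny = numel(y1);
d = zeros(nboot, 1);
for i = 1:nboot
    xb = x1(randi(nx, nx, 1));              % resample WITH replacement
    yb = y1(randi(ny, ny, 1));
    sp = sqrt(((nx-1)*var(xb) + (ny-1)*var(yb)) / (nx+ny-2));
    d(i) = (mean(xb) - mean(yb)) / sp;      % Cohen's d (pooled SD)
end
ci = prctile(d, [2.5 97.5]);                % 95% percentile CI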
Correcting for sample size
As with all statistical tests, the larger the sample size, the more accurately it can describe the population. However, in certain fields such as cognitive science, researchers are often required to work with relatively small (n < 50) sample sizes due to the limited availability of test subjects or other logistical study constraints. This can hinder the researcher's ability to measure the true, unbiased size of an effect. Despite the advantages of standardised effect size measures, metrics such as Cohen's d have been shown to have an upwards bias of up to about 4% for sample sizes of less than 50. This bias is somewhat reduced by using the pooled weighted standard deviation of the samples [Hedges, 1981]. Additionally, a bias correction factor, approximately equal to 1 − 3/(4n − 9), can be applied to the effect size measure and its CI [Hedges and Olkin, 1985]. When this correction factor is applied, it is usual to refer to the effect size as Hedges' g. PERMUTOOLS automatically applies bias correction when calculating Cohen's d and Glass' delta (and their CIs), unless specified otherwise.
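The quoted small-sample correction is simple enough to state in code; a sketch (our own helper, not a toolbox function) applying the approximate factor to a precomputed Cohen's d and its CI:

function [g, cig] = hedgesg(d, cid, nx, ny)
% Convert Cohen's d (and its CI) to Hedges' g via the approximate
% bias-correction factor 1 - 3/(4*n - 9), with n the total sample size.
n = nx + ny;
J = 1 - 3 / (4*n - 9);
g = J * d;
cig = J * cid;
end

For n = 60 (as in the example below), J is about 0.987, so the correction shrinks the raw estimate by a little over 1%.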
Statement of need
Whilst resampling methods have gained widespread acceptance, many popular programming languages, including MATLAB, lack efficient implementations or have not yet fully integrated them into their core statistical packages. This hinders the adoption of these robust and versatile techniques by researchers, limiting the quality and reliability of quantitative research. To address this need, PERMUTOOLS provides a comprehensive set of functions in the MATLAB programming language for conducting resampling-based inferential statistics that are easy to use and computationally efficient. Moreover, it is optimised for dealing with large, multivariate datasets and offers powerful methods for correcting for multiple comparisons and sample size, making it an invaluable tool for researchers across various fields of quantitative research. PERMUTOOLS offers a range of features that distinguish it from existing statistical software packages; some of the key features are described below.
Key features
• Optimised Resampling Algorithms: PERMUTOOLS utilizes efficient implementations of resampling algorithms that are optimised for multivariate data, ensuring efficient processing of even large datasets with negligible compute times.
• Multiple Comparison Correction: PERMUTOOLS implements the powerful max correction method to adjust p-values and CIs for multiple comparisons when dealing with multivariate data, reducing the risk of both type I and type II errors.
• Sample Size Correction: PERMUTOOLS applies a bias correction factor to the relevant standardised effect size measures and their CIs to adjust for any inherent inflation due to sample size, ensuring less biased estimates for small (n < 50) samples.
• Multivariate Processing: PERMUTOOLS handles multivariate datasets with ease, allowing for multiple tests to be performed simultaneously, as well as the option to perform pairwise comparisons between every combination of variables in a matrix (e.g. a correlation matrix).
• Continuous Framework: PERMUTOOLS does not output test results under the traditional dichotomous decision-based framework (i.e. H = 0, 1). Instead, it outputs various quantitative measures, encouraging researchers to interpret their results in a continuous and holistic manner.
Using PERMUTOOLS
The PERMUTOOLS functions are designed to mimic the API of the equivalent parametric functions in MATLAB, with the addition of the prefix 'permu' at the beginning of the function name. For example, to conduct a permutation test based on the t-statistic, the usual function ttest() becomes permuttest(), and so on. The input and output arguments are the same as before, with the exception that the first output variable is always the test statistic (and not a dichotomous test result). An additional output variable containing the sampling distribution is included, as well as additional resampling-related input arguments that are described in the help documentation of each function. Unlike many existing statistical software packages, PERMUTOOLS uses a consistent input/output argument framework across all of its functions to optimise usability.
Example
The following example illustrates typical usage of the PERMUTOOLS toolbox. Here, we compare the means of two independent multivariate samples using permutation tests based on the t-statistic with max correction, and contrast the results with the equivalent parametric tests in MATLAB (i.e. two-sample t-tests). We then measure the associated effect sizes based on the bias-corrected standardised mean difference (i.e. Hedges' g), as well as the corresponding 95% CIs, using both a bootstrapping and a parametric approach.
First, we generate random multivariate data for two independent samples X and Y. Both samples have 20 variables, each with a mean value of approximately 0, except for the first 10 variables of Y, which have a mean value of approximately −1. Each variable contains 30 observations.
% Generate random data
rng(42);
x = randn(30,20);
y = randn(30,20);
y(:,1:10) = y(:,1:10) - 1;

Because we generated the random multivariate data from the same (normal) distribution, we can assume that their variances are approximately equal. If we could not assume this, we would first conduct a two-tailed test of variance based on the F-statistic using PERMUTOOLS' permuvartest2() function. Thus, we can proceed using the standard Student's t-statistic (as opposed to Welch's t-statistic). The two-sample permutation tests are implemented using PERMUTOOLS' permuttest2() function and the equivalent parametric tests using MATLAB's ttest2() function (a hedged sketch of these calls is given below). By default, max correction is applied to the permutation tests.
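The code listing for this step is missing from this copy of the text; a reconstruction of the two calls described above is shown below. The exact output ordering of permuttest2() is our assumption, based on the API description given earlier (test statistic first, not a dichotomous result); the toolbox help documentation is authoritative.

% Run PERMUTOOLS' two-sample permutation tests (max correction on by default);
% assumed output order: test statistic, p-values, CIs
[t1, p1, ci1] = permuttest2(x, y);

% Run MATLAB's equivalent parametric two-sample t-tests (column-wise)
[h2, p2, ci2, stats2] = ttest2(x, y);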
% Run MATLAB's parametric effect size analysis
d3 = zeros(1,20);
ci3 = zeros(2,20);
for j = 1:20
    stats3 = meanEffectSize(x(:,j), y(:,j), 'effect', 'cohen', 'paired', 0);
    d3(j) = stats3.Effect;
    ci3(:,j) = stats3.ConfidenceIntervals';
end

% Run PERMUTOOLS' bootstrapped effect size analysis
[d4, ci4, stats4] = booteffectsize(x, y, 'effect', 'cohen', 'paired', 0);

To compare the results of our parametric and resampling analyses, we next plot some of the statistics as a function of variable (Fig. 2). The effect of max correction is clearly visible on the test statistic CIs and the resulting p-values, which are consistently more conservative than those of the uncorrected parametric tests (Fig. 2, left, middle). Importantly, spurious effects observed in the parametric case (e.g. variable 20) did not survive max correction in the permutation tests. In the effect size analysis, the 95% CIs estimated via bootstrapping appear to approximate the parametric CIs reasonably well (Fig. 2, right). From the above statistical analysis, we can examine the results of each individual pairwise comparison between X and Y, which are contained in the variables output by the PERMUTOOLS functions. Taking the first variables of X and Y as an example, we see that the mean of X_1 (M = −0.06, SD = 0.91) was significantly greater than the mean of Y_1 (M = −1.09, SD = 0.86), even after adjusting for multiple comparisons (t(58) = 4.49, p = 0.0008, Hedges' g = 1.14, 95% CI [0.68, 1.72]). We recommend always reporting the absolute values of the tests as shown here, in particular with the effect size and CI included.
Future work
PERMUTOOLS is under active development, and new functions and features are constantly being added as needed. Future work aims to expand the toolbox with new permutation-based tests, including repeated measures ANOVAs (one-way, two-way) and multi-way ANOVAs (n-way), as well as to extend the existing ANOVA tests to deal with unbalanced and multivariate samples. Whilst permutation tests have been shown to outperform nonparametric tests based on rank transformations [Holt and Sullivan, 2023], there are certain situations where a rank-transformation approach is desirable, for example when dealing with ordinal data and outliers. To accommodate a wider range of data types, future work aims to develop permutation-based solutions for nonparametric tests based on rank transformations, e.g. the Sign test, Wilcoxon signed rank test, Mann-Whitney U-test, Kruskal-Wallis test, Friedman test, etc. PERMUTOOLS is maintained by the corresponding author but accepts contributions from the research community at large. If you would like to contribute to PERMUTOOLS, or request that a specific statistical test or feature be added, please email the corresponding author at the address provided above, or use the formal channels on GitHub, such as creating a pull request or opening an issue.
Figure 1: Permutation distributions for two-tailed tests based on the t-statistic (blue: uncorrected; red: max-corrected). Multivariate data were randomly generated to simulate two independent samples, both consisting of 20 variables (i.e. adjusted for 20 comparisons), each with 30 observations (see the example code in the text). The uncorrected permutation distribution shown is that generated for the third test (i.e. variable 3).
Figure 2: Results of the permutation tests and effect size analysis. Left: test statistic (unstandardised) and 95% CI (parametric and permutation) for each test. Middle: p-value (parametric and permutation) for each test. Right: effect size (Hedges' g) and 95% CI (parametric and bootstrapped) for each test.
"Computer Science",
"Mathematics"
] |
A Computational Quantum-Based Perspective on the Molecular Origins of Life’s Building Blocks
The search for the chemical origins of life represents a long-standing and continuously debated enigma. Despite its exceptional complexity, in the last decades the field has experienced a revival, owing also to the exponential growth of computing power, which allows for efficiently simulating the behavior of matter (including its quantum nature) under the disparate conditions found, e.g., on the primordial Earth and on Earth-like planetary systems (i.e., exoplanets). In this minireview, we focus on advanced computational methods capable of efficiently solving the Schrödinger equation at different levels of approximation (i.e., density functional theory), such as ab initio molecular dynamics, which can realistically simulate the behavior of matter under the action of energy sources available in prebiotic contexts. In addition, recently developed metadynamics methods coupled with first-principles simulations are reviewed and exploited to answer old enigmas and to propose novel scenarios in the exponentially growing research field embedding the study of the chemical origins of life.
Introduction
1953 may be considered an annus mirabilis for our comprehension of fundamental aspects of life as we know it and its chemical origins. In fact, during that year, two pioneering and outstanding papers were published in Nature [1] and Science [2]. Whereas the former, by Watson and Crick [1], unveiled the molecular unit containing all the information necessary for the functioning of all living beings (i.e., deoxyribonucleic acid, DNA), the latter, which has gone down in history as the Miller-Urey experiment [2], succeeded in identifying some plausible chemical pathways leading to the first biogenic molecules from simple organic and inorganic precursors.
The Miller-Urey experiment [2] laid the basis for the onset of a novel scientific branch currently known as prebiotic chemistry, grounded in the theoretical hypothesis of chemical evolution proposed in the 1930s, separately, by Alexander I. Oparin and John B. S. Haldane [3]. This relatively young research field encompasses multi- and interdisciplinary chemical approaches aimed at investigating the fundamental mechanisms underlying the transformation of simple molecular units into more complex species directly associated with the onset of life [4]. Since 1953, a conspicuous number of papers have been published proposing various prebiotic environments and different energy sources potentially compatible with realistic primordial scenarios [5-15]. In addition to specific molecular building blocks, particular catalytic energy supplies (i.e., intense electrical discharges typical of lightning phenomena, UV light, strong mechanical stresses generated under meteorite impacts, high-pressure/high-temperature regimes found in hydrothermal vents, etc.) are necessary to trigger potentially relevant prebiotic reactions [16-21]. These energy sources were essential to drive a set of highly diverse simple molecules bearing elements such as hydrogen, oxygen, carbon, nitrogen, and phosphorus (to cite just the most abundant ones) toward complex biological molecules such as DNA and ribonucleic acid (RNA).
Although DNA and RNA are stable molecules capable of storing genetic information, the presence of the hydroxyl group on the 2′ carbon of the ribose of RNA (and its absence on the 2′ carbon of the deoxyribose of DNA) renders RNA a more reactive species than DNA [22,23]. This evidence, along with other important aspects, led to the proposal of the "RNA-first" or "RNA-world" hypothesis, in which RNA, rather than DNA, is considered the pivotal molecular precursor of the first living beings that appeared on our planet [12]. In fact, its lower stability, accompanied by its self-replicating capability, makes RNA a better candidate than DNA in the prebiotic realm. However, there remains the question of how the RNA and DNA building blocks, such as sugars and nucleobases, along with the amino acids ascribed to life, were primordially synthesized. This question represents the key enigma that prebiotic chemistry tries to answer [24,25].
As anticipated, the Miller-Urey experiment [2] reported the first experimental evidence of the possibility of converting simple inorganic and organic molecules, widespread in Earth-like primordial atmospheres, into prebiotic building blocks of life-giving molecules upon exposure to electric discharges mimicking lightning phenomena. Since then, a plethora of other experiments have been performed (see [26,27] and references therein), not only with the aim of understanding the generality of the Miller-Urey experiments but also by employing diverse starting compounds such as hydrogen cyanide (HCN). As shown by Oró [8,9], a concentrated aqueous solution of ammonium cyanide (the ammonium salt of HCN) is capable of undergoing multiple chemical transformations affording adenine, one of the nucleobases, by simple heating. Moreover, in addition to amino acids and nucleobases, the primordial formation of sugars, within the so-called formose reaction [28-30], also holds a pivotal role in the seminal steps of early molecular evolution leading to RNA. At the same time, some hypotheses proposed that other simple molecules could have served as feedstock molecules in prebiotic synthetic processes. One of these suggests that formamide (H2NCHO) could have accumulated in sufficiently high amounts to serve as both the building block and the reaction medium for the synthesis of the first biogenic molecules [14,31].
Although all these experimental findings established the foundation and the playground of prebiotic chemistry, owing to the exponential growth of supercomputing power [32,33], all experimental procedures are nowadays routinely supplemented by numerical simulations. These computations are capable of shedding light on the molecular mechanisms governing the transformation of simple molecules into more complex ones under reliably simulated prebiotic scenarios. In particular, by exploiting the robust computational toolkit stemming from quantum mechanics and density functional theory (DFT), traditional static quantum chemical calculations and advanced ab initio molecular dynamics simulations have recently been capable of reintroducing potentially relevant prebiotic molecules (such as, e.g., formamide itself) into the whole framework concerning the chemical origins of life. In fact, although formamide as a prebiotic molecule was already considered by early experimental studies in the 1970s, due to the works of Schoffstall [34] and Yamada et al. [35], and was reconsidered 20-30 years later by Saladino and Di Mauro [31], only during the last decade have computations been capable of corroborating the prebiotic role of this key intermediate species [36-46]. This new conceptual framework, in which it is feasible to monitor all the electronic, atomic, and molecular processes involved in prebiotic chemical reactions owing to quantum mechanical calculations, makes it possible not only to test new hypotheses but also to deepen and accelerate our comprehension of the general patterns behind the complexity of the chemical origins of life.
In the current minireview, we outline the key findings of selected investigations employing supercomputing approaches that have recently enabled a microscopic understanding of the onset of the building blocks of life, such as amino acids and sugars, not only under plausible prebiotic conditions but also elsewhere in the Universe.
Theory
Lightning, with its associated strong electrostatic potential gradients, can trigger several transformations in matter at the molecular level, including transformations of prebiotic interest. From a genuinely theoretical and computational perspective, the inclusion of the catalytic effects triggered by the application of intense electric fields on matter has been problematic from the outset. In fact, ever since the definition of polarization, a long history witnesses the efforts devoted to developing and implementing methods suitable for the description of electric fields in density functional theory (DFT) algorithms [47-52].
The toughest problem is posed by the periodicity of the simulation box. Because the quantum position operator is nonperiodic, periodicity in the presence of an electric field E leads to a significant modification of the electron potential in all replicas of the simulation box. Incidentally, the ground state is also ill-defined [50,53,54]. Several perturbative approaches for the implementation of electric fields have been reported, but only owing to the Modern Theory of Polarization and Berry's phases was such a tricky problem successfully solved [48,49,55], as also summarized by some of us in [56]. Alternatively, the effective screening medium (ESM) method, which precisely overcomes this issue (especially when simulating charged electrode surfaces), may be invoked [57]. On the other hand, following the Modern Theory of Polarization, Nunes and Gonze [51] proved that perturbative analyses can be straightforwardly obtained from a variational principle relying on the minimization of a functional of the form

E^E[{ψ_kn}] = E_KS[{ψ_kn}] − E · P[{ψ_kn}], (1)

where E_KS({ψ_kn}) is the Kohn-Sham (KS) energy depending on the occupied Bloch functions, while P({ψ_kn}) represents the zero-field Berry's-phase polarization associated with the electrons. Within the KS scheme, the Berry's-phase polarization of the noninteracting KS system is not exact [58,59]. A strategy for circumventing this issue is to consider a more general Hohenberg-Kohn theorem, where the electron density n(r) and the polarization P(r) determine in a unique manner all ground-state properties [58,59]. The authors of [47] admirably demonstrated that the functional (1) can be used as an energy functional for a variational approach also within the finite-field scenario. This way, the computation of the polarization P under the action of electric fields affords the solution to the problem of calculating all properties of an insulator in a homogeneous electric field. Indeed, the inclusion of the Berry's-phase [55] polarization into Equation (1) solves the problem. More specifically, by considering an electric field E along a given direction, the following energy functional can be written:

E^E[{ψ_i}] = E^0[{ψ_i}] − E · P[{ψ_i}], (2)

where E^0[{ψ_i}] is the energy functional in the absence of the field and P[{ψ_i}] is the polarization defined by [52]

P[{ψ_i}] = −(L/π) Im ln det S[{ψ_i}], (3)

where L is the periodicity of the system (i.e., the edge of the simulation box) and S[{ψ_i}] is the matrix of overlaps S_ij = ⟨ψ_i| exp(i2πx/L) |ψ_j⟩. Such a formulation can be extended to also yield the forces, by adding to (2) the ionic term −E · P_ion, with

P_ion = Σ_i Z_i R_i, (4)

where P_ion represents the polarization associated with the nuclei, R_i is the spatial coordinate of the ith nucleus along the axis of the field, and Z_i is the charge stemming from the protons in the nucleus. Clearly, such an expression generates a contribution to the force acting on the ith atom, corresponding to F_i = E Z_i. This pioneering theoretical framework [47] has been extensively used in most of the computations summarized in the current minireview.
Another sophisticated computational method employed in the works summarized in this minireview deals with modeling the propagation of single as well as multiple shock waves in matter. This approach, known as the multiscale shock-compression simulation technique (MSST) [60], is based on the Euler equations for compressible flow, which enforce the conservation of mass, momentum, and energy at every point of the wave. One of the major strengths of this approach is the possibility of exploring the relevant configuration space within timescales affordable by common molecular dynamics simulation techniques. In fact, the MSST allows for the dynamical simulation of condensed-phase systems under shock conditions for time periods orders of magnitude longer than possible with standalone nonequilibrium molecular dynamics approaches, offering a unique advantage in computationally demanding simulations such as first-principles molecular dynamics. Last but not least, this approach relieves typical finite-size issues related to the simulation box while allowing for a realistic description of the chemistry under extreme conditions, a combination of benefits rendering this advanced technique perfectly suited to the simulation of meteorite impacts, dust-grain collisions, and other catastrophic mechanical events plausibly relevant for the onset of prebiotically important chemical species [36,39].
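For reference, the conservation constraints referred to above (mass, momentum and energy across a steady planar shock) are, in their textbook Rankine-Hugoniot form (our reconstruction; see [60] for the exact equations of motion used by the MSST):

$$
\rho\,(v_s-u)=\rho_0 v_s,\qquad
p-p_0=\rho_0 v_s u,\qquad
e-e_0=\tfrac{1}{2}\,(p+p_0)\!\left(\frac{1}{\rho_0}-\frac{1}{\rho}\right),
$$

where $\rho$, $p$ and $e$ are the density, pressure and specific internal energy behind the shock, the subscript 0 denotes the unshocked state, $v_s$ is the shock speed and $u$ the particle velocity.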
Simulations
Computations and numerical simulations discussed in this minireview were executed either by exploiting the Car-Parrinello [61] (CPMD) approach or by using Born-Oppenheimer molecular dynamics (BOMD) techniques, both of which fall under the umbrella of ab initio molecular dynamics (AIMD) methods. The software suite Quantum ESPRESSO [62] was employed for CPMD simulations, whereas the software package CP2K [63,64] was used when the BOMD formalism was adopted. In all cases where the effects triggered by lightning phenomena were investigated, liquid samples of disparate chemical nature were placed under the action of static and homogeneous electric fields applied along a given direction. As usual, for the sake of comparison, prolonged AIMD simulations were always performed at zero field.
In all cases in which a uniaxial shock compression was applied to investigate potential impact-induced chemistry, the multiscale shock-compression technique (MSST) [60] was adopted. The shock wave was propagated along the x-axis of the employed reference system. After compression, decompression of the simulation boxes enabled the analysis and identification of the most stable chemical reaction products. We tested uniaxial shock-wave velocities ranging from 6 to 10 km·s⁻¹, producing several shock-compressed thermodynamic states by means of the MSST in trajectories of about 5-10 ps. The most compressed state was then decompressed toward its final decompressed pressure, corresponding to the unperturbed (starting) simulation cell.
As for the specific technical details of the simulation boxes used in the investigations reported here, we kindly invite the interested reader to refer to the original publications (see, e.g., [18,38-40]). The investigated samples include many different kinds of liquids, from systems replicating the original Miller-Urey samples [38] to aldehyde aqueous solutions [40], and from prototypical simple mixtures present in dust grains [39] to planetary atmosphere compositions [18]. When an electric field was applied, its intensity was gradually increased in increments of 0.05 V/Å from zero up to ∼0.50 V/Å. Although nuclear quantum effects may play a role in assisting specific electric-field-driven chemical reactions [65], in all cases summarized here the dynamics of the nuclei was treated classically by invoking the Verlet algorithm. Canonical sampling was generally achieved in CPMD simulations by coupling the system under investigation to a Nosé-Hoover thermostat or, in BOMD computations, by exploiting CSVR thermostats [66].
Several DFT exchange-correlation functionals were employed, depending on the specific system under investigation; the Perdew-Burke-Ernzerhof (PBE) [67] and Becke-Lee-Yang-Parr (BLYP) [68,69] functionals were the most frequently used. In addition, to take dispersion interactions into account, semiempirical corrections (i.e., PBE-D3 and BLYP-D3) [70,71] were applied. While in most of the CPMD simulations cutoff energies of 35 Rydberg (Ry) and 280 Ry allowed us to adopt timesteps of ∼0.10-0.12 fs, in BOMD simulations plane-wave cutoffs of 400 Ry were generally imposed. Moreover, as usual for BOMD, larger timesteps than those used in CPMD were almost always set (i.e., 0.50 fs). Whereas in CPMD the core electronic interaction was treated via ultrasoft pseudopotentials (USPP), in BOMD Goedecker-Teter-Hutter pseudopotentials within the GPW method were generally employed. In the BOMD calculations executed with the CP2K software, the wavefunctions of the atomic species were expanded in triple-zeta valence plus polarization (TZVP) basis sets, whereas in CPMD simulations only plane waves were employed.
Although AIMD simulations represent one of the most efficient computational tools for mimicking the chemical nature of matter, the timescales of current AIMD simulations are limited by the relatively high computational demand of quantum-based computations. As a consequence, the determination of accurate free-energy surfaces (FES) is rather difficult by means of AIMD simulations alone. On the other hand, there exists an efficient numerical approach, known as metadynamics (MetD) [72], which is able to accelerate a given process by depositing a history-dependent bias, allowing the system to visit the relevant stability valleys; at the same time, MetD reproduces the free-energy landscape of the process. In order to find smart and general reaction coordinates, a path collective variables MetD [73] was developed, and an updated version of it has been reported [74,75]. This powerful method has been exploited, owing to the usage of PLUMED [76,77], to quantitatively determine the free energy associated with the first fundamental reaction step of the formose reaction [40] and to reproduce the conversion of glycine into ethanolamine at different pressures.
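To convey the essence of the bias-deposition idea behind metadynamics, here is a toy, self-contained MATLAB sketch on a one-dimensional double well (purely illustrative; the cited works use PLUMED's path-collective-variable machinery coupled to first-principles dynamics, and all values below are assumptions):

% Toy metadynamics: overdamped Langevin walker on V(s) = (s^2 - 1)^2,
% with periodic Gaussian bias deposition along the collective variable s.
rng(1);
dt = 1e-3; kT = 0.1; nsteps = 2e5;
w = 0.01; sigma = 0.1; stride = 500;   % hill height, width, deposition stride
sc = []; s = -1;                       % deposited hill centres; start in left well
dV = @(s) 4*s.*(s.^2 - 1);             % gradient of the double-well potential
for it = 1:nsteps
    % bias force from all previously deposited Gaussian hills
    if isempty(sc), Fb = 0;
    else, Fb = sum(w*(s - sc)/sigma^2 .* exp(-(s - sc).^2/(2*sigma^2))); end
    s = s + (-dV(s) + Fb)*dt + sqrt(2*kT*dt)*randn;   % Euler-Maruyama step
    if mod(it, stride) == 0, sc(end+1) = s; end       % deposit a new hill
end
% The negative of the accumulated bias estimates the free-energy profile:
sg = linspace(-2, 2, 200);
B = zeros(size(sg));
for k = 1:numel(sc), B = B + w*exp(-(sg - sc(k)).^2/(2*sigma^2)); end
Fest = -B + max(B);                                   % FES up to a constant

Without the bias, the barrier (10 kT in this toy) would confine the walker to the starting well on any accessible timescale; the deposited hills progressively fill the basin and force barrier crossings, which is precisely why MetD can recover free-energy landscapes that plain AIMD cannot reach.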
Amino Acid Synthesis: The Miller-Urey Experiments
As the first experiment that succeeded in identifying plausible chemical pathways leading from simple organic and inorganic precursors to the first biogenic molecules, the Miller-Urey experiment is considered the progenitor of the research branch currently known as prebiotic chemistry. The original Miller-Urey setup was composed of a flask in which water was boiled. The water vapor passed through a tube into a second flask containing methane (CH4), ammonia (NH3), and molecular hydrogen (H2). This mixture was exposed to a strong electrostatic field generated by a Tesla coil, simulating the intense lightning phenomena plausibly present in the relatively unstable primordial atmosphere. Afterwards, the gas was condensed back into the first flask and cyclically subjected to the same process. A recent reproduction of the Miller-Urey experiments, highlighting the (previously overlooked) role played by the borosilicate glass composing the surfaces of the flasks, was conducted by Saladino, Di Mauro, and coworkers [78].
One week after the beginning of the experiment in 1953, Miller observed a visible change in the color of the original mixture: it took on a red coloring, suggesting that some chemical transformations had occurred in the samples. By analyzing the produced mixture via paper chromatography, it became evident that α-alanine, β-alanine, and glycine (the simplest amino acid) had formed during the process. This seminal experiment revealed, for the first time, that simple and relatively inert inorganic and organic species could be directly transformed into prebiologically relevant molecules such as amino acids simply by exposing the original samples to a ubiquitous energy source, abundantly available in the primordial terrestrial atmosphere [6,20,79].
Intuitively, Miller and Urey proposed that, in their experiment, the amino acid products formed according to the Strecker synthesis [5]. The latter is a chemical process in which amino acids are synthesized through α-amino nitriles, which form by the reaction of simple aldehydes with ammonia (NH3) and hydrogen cyanide (HCN). However, the experimental apparatus they exploited was not adequately equipped to prove the hypothesized elementary reaction steps; indeed, the presence of amino nitriles or HCN was not detected in the reaction products. After all, on-the-fly tracking of fast chemical reactions occurring on timescales of hundreds of femtoseconds (fs) remains a hard task even for modern-day reaction kinetics studies. By contrast, as laid out in the following, this kind of information is achievable by means of sophisticated and robust first-principles simulations.
Ab initio molecular dynamics (AIMD) approaches are indeed particularly useful for investigating specific physical and chemical transformations taking place at the submicroscopic level, providing a complete picture of all the steps underlying a given chemical reaction. In 2014, about 60 years after the original experiments, the in silico reproduction of the Miller-Urey experiments was reported for the first time [38], exploiting the powerful tools provided by Car-Parrinello molecular dynamics techniques [61]. One of the main findings of this work was the proof of the key role played by intermediate species such as formic acid (HCOOH) and formamide (H2NCHO) in assisting the conversion of the simple and rather inert molecules employed by Miller and Urey (i.e., H2O, CH4, NH3, H2, etc.) into glycine, the simplest amino acid [38]. In fact, it turned out that the formation of molecular species such as formaldehyde (H2CO) and hydrogen cyanide (HCN), previously and commonly thought to be fundamental for the Strecker synthesis of amino acids from the Miller-Urey samples [5], is an energetically unfavored process at the level of theory at which these mechanisms were explored (i.e., density functional theory (DFT) within the generalized gradient approximation (GGA), without dispersion interactions). Interestingly, a completely different route toward the synthesis of prebiotically important molecules was identified in this work from a series of unbiased Car-Parrinello molecular dynamics simulations. In particular, one molecular hydrogen, one carbon monoxide, and one ammonia molecule recombined, under the action of externally applied static and homogeneous electric fields with strengths on the order of 0.50 V/Å, to directly produce a formamide molecule. Incidentally, formamide represents a key intermediate found in independent prebiotic reactions taking place under multiple circumstances [17,39,80,81]. Moreover, in addition to formamide, within a few steps of Car-Parrinello dynamics under these extreme external conditions, the synthesis of two formic acid species was directly observed via the concerted recombination of a hydroxide anion (OH−) with a carbon monoxide (CO) molecule, neutralized by a proton (H+) diffusing across the hydrogen-bond network of the liquid mixture by means of the typical Grotthuss mechanism [38].
During this kind of numerical experiment, the external electric field is gradually increased. In the in silico reproduction of the Miller-Urey experiment, when the field intensity reached 0.50 V/Å, all the relevant chemistry also observed in the laboratory experiments took place. In fact, once formed, formamide either fueled the synthesis of larger and more complex molecules or broke down into water and hydrogen cyanide, as observed in the standard Strecker reaction [5,6,79]:

H2NCHO → HCN + H2O.

In parallel, a far less plausible reaction was observed between water and carbon monoxide, giving rise to formic acid:

H2O + CO → HCOOH.

After a few steps of Car-Parrinello molecular dynamics, a carbon monoxide molecule combined with a formamide molecule, forming a complex that spontaneously broke into hydrogen cyanide and a formate anion. The latter combined with a formamide-proton cation to yield α-hydroxyglycine:

HCOO− + [H2NCHOH]+ → H2N-CH(OH)-COOH.

Once formed, α-hydroxyglycine evolved into glycine [38], proving the key catalytic role played by the external electric field and the power of DFT-based simulations in disclosing otherwise-difficult-to-achieve information.
Another pivotal result afforded by those simulations was a quantitative estimate of the thermodynamic supply provided by the external field during the observed chemical reactions. Most of the detected chemical transformations occurred on timescales on the order of 2-3 picoseconds (ps) and were accompanied by a potential energy drop, indicating favorable exothermic mechanisms [38]. In addition, some preliminary, simple metadynamics calculations were executed in this seminal work to figure out the free-energy contribution carried by external static fields when crucial carbon-to-nitrogen (Figure 1a,c) and carbon-to-carbon (Figure 1b,d) bonds are formed in the system, as shown in Figure 1.
Although these additional simulations showed that the application of an electric field is capable of decreasing the height of the free-energy barrier, the persistence of a significant finite barrier at a strong field intensity of 0.50 V/Å indicates that the simple interatomic distance between the species forming either C-N or C-C bonds is not a good choice of reaction coordinate for reproducing the complex multidimensional free-energy landscape. Additional simulations performed by means of metadynamics techniques later clarified some of these crucial aspects [82]. Further DFT-based and metadynamics simulations were executed in direct connection with more recent Miller-Urey-like experiments by Ferus et al. [20].
A final, marginal note is devoted to the role played by ionic species created by the action of the field. None of the presented reactions was caused by the presence of ions in solution; rather, the field itself is the key ingredient in all the prebiotic processes described in this section. Indeed, by running a series of simulations of the same initial set of Miller-Urey molecules in the complete absence of the external electric field, but replacing water and ammonia molecules with the corresponding ions, no significant reactions were observed, with the exception of trivial neutralization/recombination processes [38]. This proves the fundamental role of the electrostatic field, as also more recently shown by investigating the electrochemical behavior of glycine and alanine on a biased platinum surface by quantum electrochemistry [46]. Obviously, it is not possible to rule out that, in real systems, radical species could contribute to the observed reactivity; such factors are not satisfactorily captured by DFT calculations. On the other hand, all these results confirm that the electric field is not just useful for dissociating molecules into ions: it also represents an order-making agent favoring the assembly of larger chemical units from smaller ones, and hence affording chemical complexity.
Sugar Synthesis: The Formose Reaction
The formose (a portmanteau of formaldehyde and aldose) reaction is a series of relatively complex autocatalytic reactions yielding sugars from simple aldehydes. It was discovered in 1861 by Butlerov, almost a century before the prebiotically relevant results earned by Miller and Urey. Thus, at the time of its discovery, neither connections nor implications were drawn to the prebiotic chemistry framework, also because the existence and the chemical structure of DNA and RNA were not yet known. Since both DNA and RNA have a sugar-based backbone, it is likely that the formose reaction played a role in the synthesis of sugars within the chemical origins-of-life paradigm.
Since its discovery, many scientists have explored the (almost) endless possible pathways that this complex chemical reaction network can follow. The conversion of simple aldehydes into simple sugars was first observed in a system containing formaldehyde in an alkaline solution [28]. Under these conditions, the formose reaction occurs spontaneously and yields multifaceted mixtures of sugars. The latter, however, further react to form an insoluble polymeric material, even though ribose can be selected by borate minerals [83]. Another circumstance in which this process experimentally takes place is in the presence of catalytic clays capable, similarly to the alkali-catalyzed formose reaction, of yielding sugars in a nonselective manner. Moreover, it has recently been shown that UV irradiation of astrophysically relevant ice mixtures containing simple species such as ammonia (NH3), methanol (CH3OH), and water (H2O) leads to the formation of aldehydes and of simple, prebiotically relevant sugars such as glycolaldehyde and glyceraldehyde [30]. These are, indeed, among the possible chemical intermediates in ribonucleotide synthesis.
As mentioned in the Introduction, RNA is assumed, within the so-called "RNA-world" hypothesis, to be the first biological molecule to have appeared on our planet. This is mainly due to the presence of the sugar ribose in its backbone, which confers on RNA a reactivity high enough to enable this molecule to perform catalytic functions, a feature that may have played a key role during the first molecular evolutionary steps toward more complex entities. Thus, understanding how ribose and its multiple precursors could have been produced under primordial Earth-like conditions is of primary interest in prebiotic chemistry and astrobiology. It is well established from several laboratory experiments that ribose can be formed only in modest quantities during different types of formose reactions and that it quickly caramelizes into an insoluble tar [29]. More recent investigations have demonstrated the possibility of conducting a whole formose reaction up to ribose and ribose-related compounds under UV radiation [84]. Due to the brief lifetime of ribose observed in some experiments, alternative "XNA" theories have been proposed, involving simpler RNA precursors formed by either a threose or a peptide backbone, capable of storing and transcribing genetic information while possessing a simpler and more stable moiety [85-88].
Notwithstanding the presented difficulties in executing a formose reaction, the most famous weakness of the traditional view of the formose reaction is the formation of the first carbon-to-carbon (C-C) bonds from formaldehyde (H2CO), the simplest aldehyde, which represents the rate-limiting step of the process. In fact, in order to observe the reaction leading from formaldehyde to glycolaldehyde, the so-called umpolung (i.e., polarity inversion) of formaldehyde should occur, which is manifestly a thermodynamically and chemically disfavored process, independently of the boundary operational conditions. Although this weakness is well known, only in recent years has it become possible to calculate, via ab initio molecular dynamics and advanced metadynamics methods exploiting path Collective Variables as reaction coordinates [74,89], the free energy associated with the umpolung event [40]. It emerged that, to synthesize a glycolaldehyde molecule (HOCH2CHO), the simplest sugar (a diose) and the second simplest aldehyde after formaldehyde, in a formaldehyde aqueous solution, an initial release of protons from one of the formaldehyde molecules is required [40], and that +30 kcal·mol−1 must first be invested to cross the free-energy barrier separating the relevant basins. Furthermore, the synthesis of glycolaldehyde takes place only by climbing an uphill and steep free-energy surface connecting two metastability basins separated by +40 kcal·mol−1. Though this latter free-energy barrier is about 10 kcal·mol−1 lower than that found for its gas-phase counterpart [90], advanced quantum-based molecular dynamics simulations have thus quantitatively explained why the first step of the formose reaction (i.e., the conversion of formaldehyde into glycolaldehyde) is definitely unlikely [40]. As discussed in the previous sections, the application of intense electric fields to hydrogen-bonded systems significantly enhances the proton transfer activity [38,65,91-94] and, more interestingly, is able to open otherwise difficult-to-achieve reaction pathways [89,92,95-97]. In addition, it is well known that protons are fundamental in lowering the free-energy barrier of the hypothesized synthesis of glycolaldehyde in the gas phase [90]. Based on this knowledge, and on the quantitative evidence that starting a formose reaction from formaldehyde is hampered by a disfavored free-energy landscape, Cassone et al. performed a de novo Miller-Urey-like experiment on glycolaldehyde aqueous solutions; notably, glycolaldehyde is ubiquitous in the universe and may have been delivered to the primitive Earth from extraterrestrial sources [90,98,99]. Simulation boxes containing glycolaldehyde aqueous mixtures were thus exposed to increasingly higher field strengths, from zero up to ∼0.5 V/Å, in a series of Car-Parrinello and Born-Oppenheimer molecular dynamics simulations [40]. The first field-induced chemical modifications in the system were recorded at a field strength of ∼0.3 V/Å, which, inter alia, corresponds to the water dissociation threshold as determined by DFT-based approaches [65,91]. At this field intensity, hydroxide (OH−) and hydronium (H3O+) ions are concomitantly present in the solution along with glycolaldehyde and water molecules, and the presence of these two ionic species creates local electric fields even stronger than those applied externally.
In fact, it is well established that fields larger than ∼1 V/Å are typically measured in the proximity of water counterions [65,100,101] and that these enhance the overall chemical reactivity. Starting from field intensities of ∼0.4 V/Å, a newly synthesized 1,2-ethenediol molecule, an enol tautomer of glycolaldehyde, is detected in the simulation box (Figure 2a), produced owing to proton transfers with the surrounding molecular environment, as shown in Figure 2b. Whereas the presence of this new compound favors a proton transfer from the nearby glycolaldehyde molecule to the enolate, the simultaneous presence of the external electrostatic potential gradient triggers the shortening of the distance between the two species, leading to the stabilization of the newly formed C-C bond (Figure 2c). This way, a fast proton recombination suddenly leads to the production of (D)-erythrose (Figure 2d), a tetrose sugar that is the direct precursor of the pentose ribose.
Figure 2. Formation of (D)-erythrose as reported and detailed in [40]. Carbon-to-carbon (C-C) distances are highlighted in Å. When an aqueous solution of glycolaldehyde was exposed to a static and homogeneous electric field of 0.45 V/Å, a newly synthesized 1,2-ethenediol reacted with a glycolaldehyde species (a), generating a glycolaldehyde enolate (b). This way, a novel C-C bond was created (c) and a (D)-erythrose molecule was rapidly synthesized (d).
A key feature of this kind of advanced computational technique is the possibility of tracking and directly visualizing not only the behavior of the nuclei but also that of the electron densities, in order to better interpret the field-induced formation of a new carbon-to-carbon bond in a system originally composed of glycolaldehyde and water; analyses of the frontier molecular orbital (FMO) type may also be exploited. By invoking FMO theory [102], informative molecular orbitals such as the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) of the two interacting molecules, which for a field strength of ∼0.5 V/Å yielded (D)-erythrose in a glycolaldehyde-water mixture, were determined [40]. The HOMO of the enol tautomer (subsequently evolving into a glycolaldehyde enolate species) and the LUMO of the neighboring glycolaldehyde molecule were evaluated, as shown in Figure 3. The most reactive HOMO of the 1,2-ethenediol species is delocalized along the whole molecule, whereas the first LUMO of glycolaldehyde is located between the oxygen of the carbonyl group and the α-carbon (Figure 3a). Although this latter atom is close to the oxygen atoms of the 1,2-ethenediol, the confined spatial extent of the LUMO, along with a nontailored phasing characterizing the interaction between the closest HOMO and LUMO, prevents the formation of a carbon-to-oxygen covalent bond between these molecules (Figure 3a). Rapidly, indeed (as highlighted in Figure 2), the system finds a configuration exhibiting the most expanded HOMO of the 1,2-ethenediol and the largest LUMO of the glycolaldehyde as first-neighbor molecular orbitals that are in phase. All these circumstances lead to the formation of the C-C bond producing the (D)-erythrose molecule [40]. In summary, advanced computational methods revealed that Miller-Urey-like conditions are capable not only of creating amino acids but also of forming sugars from simple and ubiquitous constituents. Again, the application of intense static electric fields is able to trigger the formation of new C-C bonds, the covalent bond at the basis of most biological species.
Meteorite Impacts: A Source of Chemical Complexity
Although lightning phenomena may have represented a key ingredient for the early molecular evolution of life in, e.g., planetary atmospheres, many other energy sources were available shortly before the onset of the first living organisms on our planet. In a period between approximately 4.1 and 3.8 billion years (Ga) ago, at a time corresponding to the Neohadean and Eoarchean eras on Earth, a huge number of asteroids collided with the early terrestrial planets of the inner Solar System, including Mercury, Venus, Earth, and Mars [103]. For instance, estimates of the Earth's impact flux prior to ∼4 Ga suggest an annual energy deposition larger than 10^20 J [104]. This enormous reservoir of collisional energy might have driven the complexification of small prebiotic feedstock molecules into biogenic compounds. Moreover, the fact that a plethora of violent shock impacts takes place throughout the Universe, not only through collisions between asteroids and planetary surfaces but also between dust grains in dense molecular clouds, indicates that the chemical complexity observed in some regions of deep space might be the result of these peculiar, as well as ubiquitous, mechanical processes. The detection of non-negligible amounts of glycine and other prebiotically relevant species in comets such as Halley, Hyakutake, Tempel-1, Giacobini-Zinner, Hartley 2, Hale-Bopp, and 81P/Wild 2 [105-111] indirectly suggested the possibility of observing and reproducing a relatively complex chemistry when high-energy impacts take place. In fact, many interesting experimental and computational results have emerged from disparate investigations of the catalytic effects produced by violent mechanical stresses acting on simple inorganic systems [7,16,18,36,39,42,104,112-120]. As an example, Martins et al. [16] created several icy bullets composed of simple compounds ubiquitous in cometary ices, such as ammonium hydroxide (NH4OH), carbon dioxide (CO2), and methanol (CH3OH), which were impacted on a series of rocky surfaces, mimicking planetary surfaces, using a light gas gun at the University of Kent [121]. This gun is able to shoot samples at speeds on the order of v_i ∼ 7 km·s−1, corresponding to the mutual collision speeds of typical meteorite impacts. The shocked ice samples were then analyzed, revealing the presence of a series of amino acids, including glycine, (D)-alanine, (L)-alanine, and (L)-isovaline [16]. Similar experiments have more recently been performed by Singh et al. [120] at the same facility [121], using more complex bullets composed of icy amino acids which, after being shot at a speed of ∼5 km·s−1, were transformed into complex biological structures such as peptides and membrane-like structures.
A completely different kind of experiment on this topic is that typically performed by Ferus and coworkers (see, e.g., [42]). They exploit a high-power kJ-class laser system (PALS, the Prague Asterix Laser System) with a pulse duration of 350 ps, a typical wavelength of 1315.2 nm, and an energy of 150 J per pulse. Although under some specific operational configurations PALS may also be used to mimic lightning phenomena [122], such a high-energy pulse creates in the shocked samples the thermodynamic conditions typical of meteorite/cometary impacts, where a plume (i.e., a plasma) is locally produced owing to the enormous collisional energy released. In one of these experiments, Ferus' group focused such a powerful laser beam on a formaldehyde aqueous solution to mimic the formation of a high-temperature plasma in the presence of simple formaldehyde (H2CO) and water (H2O) molecules [42]. After a series of laser pulses, Fourier transform infrared (FTIR) spectrometry clearly showed, on the one hand, the decomposition of formaldehyde into simpler species (i.e., carbon monoxide, carbon dioxide, methanol, etc.) and, on the other, the formation of interesting species such as 2-amino-2-hydroxy-acetonitrile and 2-amino-2-hydroxy-malononitrile [42]. As also proved by the same group in similar experiments, these latter molecules are key intermediates in the formation of the nucleobases (adenine, cytosine, guanine, thymine, and uracil) [19,123]. Of course, it is quite expensive and operationally complicated to set up experiments generating such high-energy events; hence, with the aim of testing promising chemical pathways deserving further laboratory investigation, advanced computational techniques, such as ab initio molecular dynamics coupled with the multiscale shock-compression technique (MSST) [60], are currently employed. As pointed out in the following, these approaches are capable of directly reproducing and monitoring the chemistry behind the transformation of inorganic into organic compounds upon violent shock-impact exposure of the original samples.
In a pioneering work exploiting crude (but computationally efficient) tight-binding density functional theory (TB-DFT) approximations, Goldman et al. [36] simulated the effects produced by shock impacts on a prototypical cometary ice mainly composed of ammonia, water, and methanol. These simulations showed not only that shock waves are able to drive the synthesis of transient C-N-bonded oligomers at extreme pressures and temperatures but also that, upon quenching to lower pressures, these oligomers break apart to form a metastable glycine-containing complex [36]. Inspired by these findings, and owing to the substantial improvement of computational capabilities accomplished in that decade, Cassone et al. [39] simulated, by means of rigorous DFT-based molecular dynamics, the physical and chemical effects created during collisions between dust grains, where molecular hydrogen (H2) is certainly abundant, in the presence of the simplest compound bearing all the primary biogenic elements: isocyanic acid (HNCO). In particular, such a mixture was subjected, in a series of diverse simulations, to unidirectional shock-wave compressions propagating at different speeds (i.e., 6, 7, 8, 9, and 10 km·s−1). Although at the lowest speeds no chemical changes were recorded, a significant strengthening of the intermolecular interactions was reported. By increasing the velocity up to 7 km·s−1, the formation of new chemical species in the simulated box was observed; in particular, the synthesis of formamide and carbamoyl isocyanate was detected [39]. Even though only a small percentage of the reactant molecules was transformed, this initial result indicated not only that some chemical complexity can be generated via meteorite impacts even in an extremely simple sample composed just of H2 and HNCO, but also that formamide may act, once again, as a springboard for further molecular complexification. As shown in Table 1, formamide and carbamoyl isocyanate are also the only species formed in the samples impacted by uniaxial shock waves propagating at a speed of 8 km·s−1, which, inter alia, produces an instantaneous Hugoniot temperature T_H of 1550 K and a peak pressure of 58 GPa in the most compressed state. However, a further increase of the collision velocity triggers a pervasive chemical response in the simulated samples. As shown in Figure 4a, where the hydrogen-hydrogen (H-H) radial distribution functions (RDFs) are plotted, at peak pressures of 65 and 72 GPa (corresponding to shock speeds of 9 and 10 km·s−1, respectively) most of the molecular hydrogen originally present in the system reacted to form other species. Incidentally, the drastic reduction of the first peak of the H-H RDF at the most extreme conditions (10 km·s−1, 72 GPa) coincides with the onset of a novel first peak in the carbon-carbon (C-C) RDF, as shown in Figure 4b. The fact that this new first maximum is located at ∼1.4 Å is the fingerprint of the birth of new C-C-bonded species in the sample. As also listed in Table 1, the most important species formed are water, hydrogen cyanide, formic acid, glycine, and other organic molecules, such as ethanimine (HNCHCH3) and vinylamine (H2NCHCH2), which are the precursors of seven amino acids: asparagine, aspartic acid, cysteine, leucine, phenylalanine, serine, and tyrosine. This way, it turned out that the transformation from chemical simplicity to chemical complexity can occur rapidly within the transient events following catastrophic impacts.
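For readers wishing to reproduce diagnostics of the kind shown in Figure 4, the following Python sketch computes a pair radial distribution function for a single atomic species in a cubic periodic box; the random coordinates standing in for, e.g., H atoms, the box size, and the binning are illustrative assumptions.

```python
import numpy as np

def radial_distribution(pos, box, n_bins=200, r_max=None):
    """Pair radial distribution function g(r) for one species
    in a cubic periodic box (minimum-image convention)."""
    n = len(pos)
    r_max = r_max or box / 2.0
    # all pairwise separation vectors, wrapped by the minimum image
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    rho = n / box**3                                   # number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = hist / (0.5 * n * rho * shell)                 # normalize by ideal-gas counts
    return 0.5 * (edges[1:] + edges[:-1]), g

# Illustrative usage on random coordinates standing in for, e.g., H atoms
rng = np.random.default_rng(1)
r, g = radial_distribution(rng.uniform(0.0, 10.0, size=(256, 3)), box=10.0)
```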
Moreover, the fact that samples containing the most widespread molecule in the universe (H2) and the simplest compound bearing all of the primary biogenic elements (HNCO) can be transformed into glycine and amino acid precursors suggests that the onset of the complex structures ascribed to the birth of life as we know it may be a general process, ubiquitous throughout the universe. Finally, we conclude the present minireview by presenting some original results showing that investigations of highly compressed molecular systems can be conducted by coupling ab initio molecular dynamics simulations with enhanced sampling techniques such as advanced metadynamics methods [72]. In particular, we have chosen an exemplary chemical reaction in which glycine (H2NCH2COOH), the simplest amino acid, also found in the Murchison meteorite, is transformed into ethanolamine (H2NCH2CH2OH), a precursor of phospholipids recently detected in a molecular cloud in the interstellar medium [124]. Furthermore, amino alcohols are believed to stabilize short oligonucleotide sequences present in the prebiotic pool [125].
In particular, we have simulated the transformation of glycine into ethanolamine in a hydrogen-containing environment under four different thermodynamic states corresponding to pressures of 1, 10, 50, and 100 GPa. The simulation box of the reactants is depicted in Figure 5a and that of the reaction products in Figure 5b. Whereas the dynamics of the atomic species has been simulated by means of DFT-based molecular dynamics, the conversion of the reactants into the products has been accelerated by means of a relatively recent metadynamics scheme [74,75,89]. Thus, by adding a history-dependent (Gaussian) potential to the relevant molecules participating in the reaction under investigation, the free-energy landscape of this chemical transformation has been reconstructed by employing two path Collective Variables, namely s and z. Whereas s follows the progress of the reaction from the reactants (typically associated with a value of s ∼ 1) to the products (typically associated with s ∼ 2), the variable z measures the orthogonal distance of the observed chemical pathway from the idealized one. Although these simulations were carried out only with the aim of observing the chemical transformation, without necessarily reaching convergence of the free-energy surface, they allow for a rough estimate of the free-energy supply necessary for the initial system to escape from its local free-energy minimum and reach the product state, as shown in Figure 6. It turns out that whereas, under a modest (for an H2 environment) pressure of 1 GPa, about 12 kcal·mol−1 must be invested to transform glycine into ethanolamine (Figure 6a), only 3 kcal·mol−1 (Figure 6d) are required to run the same chemical reaction when an external pressure of 100 GPa, similar to those reached during meteorite impact events, is imposed. Although such a difference may appear modest compared with the two-orders-of-magnitude increase in compression, it is worth noticing that, according to the Eyring equation [126,127], a reduction of the free-energy barrier from 12 to 3 kcal·mol−1 at the same temperature is equivalent to an enhancement of about 10^9 in the kinetic rate constant of the investigated reaction, giving a better sense of the pressure-induced catalytic effects occurring, e.g., in violent impact events and under hydrothermal vent conditions.
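As a back-of-the-envelope check of this barrier-to-rate conversion, the short Python snippet below evaluates the Eyring rate-constant ratio exp(ΔΔG‡/RT) for the reported barrier drop. Since the enhancement depends exponentially on the temperature, which the text does not restate and we therefore treat as an assumption, the ratio is roughly 10^6.6 at 300 K and approaches the quoted ∼10^9 only at lower temperatures (∼200 K).

```python
import numpy as np

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def eyring_ratio(dg_high, dg_low, T):
    """Ratio of Eyring rate constants for two barriers (kcal/mol) at temperature T;
    the k_B*T/h prefactor cancels in the ratio."""
    return np.exp((dg_high - dg_low) / (R * T))

# Barrier drop reported for glycine -> ethanolamine: 12 -> 3 kcal/mol
for T in (200.0, 300.0):  # assumed temperatures, not restated in the text
    ratio = eyring_ratio(12.0, 3.0, T)
    print(f"T = {T:.0f} K: k(100 GPa)/k(1 GPa) ~ 10^{np.log10(ratio):.1f}")
```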
Conclusions
In this minireview, we have reported a series of findings in the research fields of prebiotic chemistry and astrobiology. In addition to a general overview of some milestone historical and recent experiments, we have focused on advanced computational methods capable of efficiently solving the Schrödinger equation at different levels of approximation (i.e., density functional theory) and of realistically simulating the dynamical behavior of complex condensed-phase systems under the action of the disparate energy sources available in prebiotic contexts.
In particular, we have reported a series of state-of-the-art numerical investigations elucidating the interplay between the quantum behavior of matter emerging at atomic and molecular scales and externally applied energy sources, such as intense electric fields mimicking the lightning phenomena of primordial Earth-like atmospheres, violent shock impacts simulating the collision of meteorites with primitive planetary surfaces, and the high-pressure/high-temperature regimes typical, e.g., of hydrothermal vents. Among the presented approaches, ab initio molecular dynamics and enhanced sampling techniques (i.e., metadynamics) have been exploited over the last decades to answer old enigmas and to propose novel scenarios in the exponentially growing research field embracing the study of the chemical origins of life.
"Computer Science",
"Chemistry",
"Biology",
"Physics"
] |
scCancer2: data-driven in-depth annotations of the tumor microenvironment at single-level resolution
Abstract
Summary: Single-cell RNA-seq (scRNA-seq) is a powerful technique for decoding the complex cellular composition of the tumor microenvironment (TME). As previous studies have defined many meaningful cell subtypes in several tumor types, there is a great need to computationally transfer these labels to new datasets. Moreover, different studies used different approaches or criteria to define the cell subtypes of the same major cell lineages, so the relationships between the cell subtypes defined in different studies should be carefully evaluated. In this updated package, scCancer2, designed for integrative tumor scRNA-seq data analysis, we developed a supervised machine learning framework to annotate TME cells with the cell subtypes defined in 15 scRNA-seq datasets comprising 594 samples in total. Based on the trained classifiers, we quantitatively constructed similarity maps between the cell subtypes defined in different references by testing on all 15 datasets. Secondly, to improve the identification of malignant cells, we designed a classifier by integrating large-scale pan-cancer TCGA bulk gene expression datasets and scRNA-seq datasets (10 cancer types, 175 samples, 663 857 cells). This classifier shows robust performance when no internal confident reference cells are available. Thirdly, scCancer2 integrates a module to process spatial transcriptomic data and analyze the spatial features of the TME.
Availability and implementation: The package and user documentation are available at http://lifeome.net/software/sccancer2/ and https://doi.org/10.5281/zenodo.10477296.
Introduction
The rapid development of single-cell RNA-sequencing (scRNA-seq) and spatial transcriptome (ST) technologies has promoted the accumulation of large-scale single-cell-resolution omics data, enabling us to analyze the composition of the tumor microenvironment (TME) more precisely and comprehensively.
Currently, researchers have defined many meaningful TME cell subtypes in various cancer types (Zheng et al. 2017, Guo et al. 2018, Zhang et al. 2018), which provides the opportunity to train supervised machine learning models and transfer these expert annotations to new datasets. Therefore, we upgraded our toolkit scCancer (Guo et al. 2021) to a new version (scCancer2) and refined the cell type annotation down to more subtle subtypes. The updated version tends to preserve the labeling information and prior knowledge of the original references and assigns multiple labels to the user's data with a supervised machine learning framework. Compared with previous methods for cell type annotation (Kiselev et al. 2018, Alquicira-Hernandez et al. 2019, Aran et al. 2019, de Kanter et al. 2019, Pliner et al. 2019, Tan and Cahan 2019, Li et al. 2020, Galdos et al. 2022), we focused on TME scenarios and provide a set of classifiers for annotating the cell types and subtypes defined in different references. With the accumulation of scRNA-seq-based TME studies, there is a common need to examine the differences and similarities of the cell subtypes defined in different references. So, based on the trained classifiers, we established a cell subtype similarity map across multiple references by comparing the predicted labels on large-scale tumor single-cell collections (15 datasets, 594 samples, and 1 213 469 cells; 5 training datasets for T cell subtypes, 3 for B cells, 6 for myeloid cells, 4 for endothelial cells, and 4 for fibroblasts). The generated similarity map is a meaningful resource for summarizing and comparing the abundant knowledge from different datasets.
At the single-cell level, in addition to cell subtype annotation, it is also crucial to identify the malignant cells in the TME. Currently, methods based on copy number variations (CNVs) (Patel et al. 2014, Gao et al. 2021, Guo et al. 2021) have been applied to malignancy annotation in various cancer types. However, when no internal confident reference cells are available, CNV-based methods frequently give unstable results. To be compatible with these special scenarios, we developed an additional data-driven method. Firstly, a reference dataset combining an scRNA-seq dataset (663 857 cells in 175 samples) and bulk RNA-seq data (7012 TCGA samples) was established across multiple cancer types, and an XGBoost-based (Chen and Guestrin 2016) classifier was then trained to identify malignant cells with high generalization ability. In addition, the classifier achieved higher computational efficiency and a lower memory burden, making it suitable for processing large-scale datasets. To demonstrate the robustness of this module in scCancer2, we categorized the test samples into four groups: tumor samples with a bimodal distribution of malignancy scores, tumor samples with a unimodal distribution, normal samples, and organoid samples. Extensive tests proved that it can be a great supplement to CNV-based methods.
Finally, the spatial dimension is also highly significant in characterizing the TME. To systematically dissect the spatial features of the TME, we constructed an automated ST analysis module comprising three analytical perspectives: spatial interaction, spatial heterogeneity, and spatial structure. By applying it to 60 samples from various cancer types, we demonstrated its good performance.
Overall, taking advantage of the accumulated scRNA-seq data from cancer clinical samples, the in-depth expert annotations of the TME in each study, and the new spatial information brought by ST technology, scCancer2 improves its functionalities for automatically dissecting complex TME features. The performance of scCancer2 has been extensively tested on a large amount of data across various cancer types.
TME cell subtype annotation
Cell subtype annotation method within a single dataset
Firstly, we divided the dataset into training and validation sets at a 4:1 ratio (5-fold cross-validation, stratified sampling by cell subtype). Given a training expression matrix X with cell subtype labels L and the marker genes of every subtype G_prior (or the differentially expressed genes of clusters), we first selected representative cells to construct a core training set according to the marker score generated by the Garnett method (Pliner et al. 2019). The "Aggregate Marker Score" function evaluates the expression of important features within cells, and we selected the cells C expressing specific cell type features as the core training set. Then, the Entropy test (Li et al. 2020) or Highly Regional Genes (Wu et al. 2022a) was used for feature selection. The genes G used for training were the union of the input markers G_prior and the genes selected from the scRNA-seq expression matrix.
Therefore, we obtained an expression matrix X_{C,G} for model training. We trained a multinomial model in which every parameter represents the expression probability P of a corresponding gene. For the validation set, we also evaluated the expression of marker genes. We labeled a cell as "unknown" when it did not express any marker gene in the prior information, i.e., when, for every possible cell subtype, the "Aggregate Marker Score" function output a low marker score.
Then, we assigned cell subtypes to the remaining unlabeled cells by maximum likelihood estimation (MLE), with P weighted by the marker score mentioned above. For every possible cell subtype j, gene i, and gene expression vector x, a cell is assigned the label maximizing the multinomial log-likelihood, i.e., ĵ = argmax_j Σ_i x_i log P_ij. We evaluated the performance of our model through 5-fold cross-validation on 15 published datasets, using the mean and variance of the classification accuracy and the kappa index as indicators. We finally obtained 22 cell subtype templates across five major cell types in total (Supplementary Table S2). In addition, we compared scCancer2 with scCancer (Guo et al. 2021), Scibet (Li et al. 2020), scmap (Kiselev et al. 2018), SingleCellNet (Tan and Cahan 2019), and XGBoost (Chen and Guestrin 2016) on six datasets covering five major cell types and three sequencing technologies.
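A compact sketch of this training/assignment logic is given below, assuming a count matrix restricted to the selected genes G; the function names and the pseudocount regularization are ours and do not reflect the package's actual API.

```python
import numpy as np

def train_multinomial(X, labels):
    """Estimate per-subtype gene expression probabilities P[j, i] by MLE.
    X: (cells, genes) count matrix restricted to the selected genes G;
    labels: subtype label of each training cell."""
    subtypes = np.unique(labels)
    P = np.vstack([X[labels == j].sum(axis=0) + 1.0 for j in subtypes])  # +1: pseudocount
    P /= P.sum(axis=1, keepdims=True)
    return subtypes, P

def assign(X, subtypes, P, weights=None):
    """Assign each cell the subtype maximizing the multinomial log-likelihood,
    optionally weighting genes by a marker score."""
    w = np.ones(X.shape[1]) if weights is None else weights
    loglik = (X * w) @ np.log(P).T            # (cells, subtypes)
    return subtypes[np.argmax(loglik, axis=1)]

# Toy usage: 100 cells, 50 genes, 3 subtypes
rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(100, 50))
y = rng.choice(["A", "B", "C"], size=100)
subs, P = train_multinomial(X, y)
pred = assign(X, subs, P)
```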
Multi-label annotation structure
For a newly input dataset D, we first annotated the cell types with scCancer (Guo et al. 2021) and split the dataset by major cell types. For every subset D_sub, we traversed all the corresponding cell subtype templates and iteratively annotated cell subtypes with the above pipeline. For each reference dataset, as five "weak" models had been trained during cross-validation, we integrated the predictions of the models by voting. Cells without any predicted label reaching a vote count >2 (i.e., ≥3) were also labeled as "unknown." The output is summarized in Supplementary Table S3.
Similarity map of different subtype labels
Defining C_ij as the set of cells assigned to the jth subtype in the ith group of labels (namely, the ith dataset) and S as the similarity matrix, the similarity of every two labels was calculated as the Jaccard index of the corresponding cell sets, S_pq = |C_p ∩ C_q| / |C_p ∪ C_q|, where the indices p and q are the positions of the subtypes after flattening all the subtype labels into a vector, and the dimension of the matrix S is the length of that label vector, namely, the total number of subtype labels. Finally, we visualized the similarity matrix by heatmap, hierarchical clustering tree, and multi-dimensional scaling in R packages. The cross-annotation process within the training set is summarized in Algorithm 1; for a new test dataset, only the requirements change: 1 test dataset, N annotation models, and N groups of labels.
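The construction of the similarity map can be sketched as follows; the data layout (one annotation column per reference) and the function names are illustrative, not the package's interface.

```python
import numpy as np
import pandas as pd

def label_similarity_map(annotations: pd.DataFrame) -> pd.DataFrame:
    """Jaccard similarity between subtype labels from different references.
    annotations: one row per cell, one column per reference dataset; each
    entry is the subtype label predicted by that reference's model."""
    # flatten all (reference, subtype) pairs into "label -> cell barcode set"
    sets = {f"{ref}:{lab}": set(annotations.index[annotations[ref] == lab])
            for ref in annotations.columns
            for lab in annotations[ref].unique()}
    names = list(sets)
    S = pd.DataFrame(0.0, index=names, columns=names)
    for a in names:
        for b in names:
            union = sets[a] | sets[b]
            S.loc[a, b] = len(sets[a] & sets[b]) / len(union) if union else 0.0
    return S

# Toy usage: two references annotating the same five cells
ann = pd.DataFrame({"ref1": ["naive_B", "naive_B", "plasma", "plasma", "memory_B"],
                    "ref2": ["B_naive", "B_naive", "B_plasma", "B_plasma", "B_naive"]},
                   index=[f"cell{i}" for i in range(5)])
S = label_similarity_map(ann)
```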
Data collection, preprocessing, and annotation
We first collected 49 tumor samples sequenced with 10X Genomics and lacking previous malignancy annotations. We calculated the copy number variation scores of the tumor samples through inferCNV (Patel et al. 2014) and plotted the distribution of the malignancy scores. We retained 18 samples with an obvious bimodal distribution.
Although inferCNV has been proven to be reliable when the malignancy score follows a bimodal distribution, we found that epithelial cells were highly correlated with malignancy in the labeled results. To prevent the model from treating the features of epithelial cells as malignant, we collected nine normal samples with more normal epithelial cells. Additionally, we downloaded and integrated published datasets from the TISCH2 database (Han et al. 2022) as a supplement (Supplementary Table S4). To integrate samples from different sources, we preprocessed the data with scanpy (Wolf et al. 2018) and concatenated the expression matrices based on common genes. The combined scRNA-seq dataset was an expression matrix with 466 468 cells and 7749 genes.
Feature selection combining bulk RNA-seq and scRNA-seq
We first collected bulk RNA-seq data of 14 cancer types from TCGA. Then we preprocessed and integrated the samples based on common genes, obtaining an expression matrix with 7012 samples and 31 531 genes (Supplementary Table S5).
We calculated the top 5000 differentially expressed genes between malignant samples and normal samples for each of the 14 cancer types using the edgeR (Robinson et al. 2010) package. From these 14 groups of differentially expressed genes, we identified a total of 1997 high-frequency genes that appeared more than 8 times, of which 1069 appeared in the integrated single-cell dataset; this set was named G_bulkDE.
As for the scRNA-seq data, we selected the 2000 genes with the highest expression variance as G_scHV. The features G_scRNA selected from the scRNA-seq data for model training were the union of G_bulkDE and G_scHV. Finally, the obtained reference dataset was an expression matrix with 466 468 cells and 2756 genes.
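This feature selection amounts to a set union, as in the following hypothetical sketch; the names G_bulkDE and G_scHV and the min_freq threshold implementing "more than 8 times" are taken from the text, while the helper itself is ours.

```python
import numpy as np
import pandas as pd

def select_features(bulk_de_lists, sc_expr: pd.DataFrame, min_freq=9, n_hv=2000):
    """Union of high-frequency bulk DE genes and single-cell highly variable genes.
    bulk_de_lists: list of per-cancer-type DE gene lists (top 5000 each);
    sc_expr: (cells, genes) single-cell expression matrix."""
    counts = pd.Series(np.concatenate(bulk_de_lists)).value_counts()
    g_bulk = set(counts[counts >= min_freq].index) & set(sc_expr.columns)  # G_bulkDE
    g_schv = set(sc_expr.var().nlargest(n_hv).index)                       # G_scHV
    return sorted(g_bulk | g_schv)                                         # G_scRNA

# Toy usage: three DE lists and a small expression matrix
lists = [["TP53", "MYC", "EGFR"], ["TP53", "MYC"], ["TP53", "KRAS"]]
expr = pd.DataFrame(np.random.default_rng(0).poisson(1.0, (50, 4)),
                    columns=["TP53", "MYC", "KRAS", "ACTB"])
genes = select_features(lists, expr, min_freq=2, n_hv=2)
```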
Model training and performance evaluation
We divided the training set and validation set by sample source (Supplementary Table S4) and trained an XGBoost model with a binary logistic objective function. The confusion matrix, prediction accuracy, and AUC score were calculated with scikit-learn (Pedregosa et al. 2011) for quantitative evaluation, and UMAP was applied to qualitatively demonstrate the aggregation of the labels in the 2D space.
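A minimal sketch of this training step with the standard xgboost scikit-learn interface is shown below; the random matrices and hyperparameters are placeholders, not the settings used in scCancer2.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Toy stand-ins for the (cells x selected genes) matrices split by sample source
rng = np.random.default_rng(0)
X_train, X_val = rng.poisson(1.0, (1000, 300)), rng.poisson(1.0, (200, 300))
y_train, y_val = rng.integers(0, 2, 1000), rng.integers(0, 2, 200)

clf = XGBClassifier(objective="binary:logistic", n_estimators=200,
                    max_depth=6, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_train, y_train)

prob = clf.predict_proba(X_val)[:, 1]           # malignancy probability per cell
pred = (prob > 0.5).astype(int)
print(confusion_matrix(y_val, pred))
print("ACC:", accuracy_score(y_val, pred), "AUC:", roc_auc_score(y_val, prob))
# feature_importances_ exposes the genes contributing most to the classification
top_genes = np.argsort(clf.feature_importances_)[::-1][:20]
```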
We analyzed the performance of scCancer2 under various scenarios: tumor samples with either a bimodal or a unimodal distribution of malignancy scores, normal samples, and organoid samples. Additionally, we evaluated the generalization ability of scCancer2 by changing the cancer types in the training set and by testing it on new cancer types.
Data collection
The test datasets for the spatial module in scCancer2 were obtained from previous studies and 10X Genomics. We collected 60 samples across nine cancer types in total (Supplementary Table S6).
Spatial interaction analysis
First, we automatically identified the boundary regions between pairs of clusters. All spots belonging to the two clusters under consideration are extracted, and a spot of one cluster is considered to be in the boundary area of the two clusters if any of its six neighbors belongs to the other cluster; every spot in the two clusters is examined. The connected regions formed by all boundary points then define the boundary regions. Furthermore, boundary regions with fewer than 20 spots were removed to provide sufficient data for the P-value calculations.
Then we evaluated the interaction strengths and P-values in the boundary regions. We extracted the expression of the approximately 1400 known ligand-receptor (L-R) pairs from the CellPhoneDB database (Efremova et al. 2020). The L-R interaction strength between two clusters is defined by the average expression of the ligand and of the receptor in the respective clusters. To identify significant interactions, we randomly permuted the labels of the spots 1000 times; each time, we recalculated the strength of every interaction pair using the new labels, forming a null distribution of 1000 sampling points for each pair. P-values were obtained as the proportion of permuted strengths exceeding the actual strength. We then filtered out weak L-R pairs based on their strengths and P-values, and finally visualized the significant interactions between clusters in a scatter plot.
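The permutation scheme can be sketched as follows for a single L-R pair; scCancer2 pairs the ligand and receptor averages across the two clusters, and the simple mean of the two averages used here is one plausible choice of statistic, not necessarily the package's.

```python
import numpy as np

def lr_permutation_test(lig, rec, labels, a, b, n_perm=1000, seed=0):
    """Permutation P-value for one ligand-receptor pair at a cluster boundary.
    lig, rec: per-spot expression of the ligand and the receptor;
    labels: cluster label of each spot; a, b: the two clusters of interest."""
    rng = np.random.default_rng(seed)

    def strength(lab):
        # ligand average in cluster a combined with receptor average in cluster b
        return 0.5 * (lig[lab == a].mean() + rec[lab == b].mean())

    observed = strength(labels)
    null = np.array([strength(rng.permutation(labels)) for _ in range(n_perm)])
    return observed, (null >= observed).mean()   # one-sided empirical P-value

# Toy usage
rng = np.random.default_rng(1)
labels = rng.choice(["T1", "T2"], size=500)
lig, rec = rng.gamma(2.0, size=500), rng.gamma(2.0, size=500)
obs, p = lr_permutation_test(lig, rec, labels, "T1", "T2")
```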
Phenotypic heterogeneity analysis
We mainly considered 14 phenotypic characteristics from CancerSEA (Yuan et al. 2019). Either the average expression of the signature genes or the "AddModuleScore" function in Seurat can be used to calculate the phenotypic scores. To demonstrate the phenotypic differences among regions of tumor tissue, we manually divided the HCC-1L sample into three regions (tumor, capsule, and normal) based on the clustering results and the H&E image. We derived the phenotypic scores of the three regions and used the Wilcoxon test to determine the significance of the pairwise differences.
Spatial structure detection
Firstly, we scored the enrichment of the corresponding gene signatures with the average expression or "AddModuleScore" in Seurat. The Global Moran's I (GMI) was then calculated based on the signature scores to infer the existence of the corresponding spatial structures:

GMI = (n / Σ_{i≠j} w_ij) × [Σ_{i≠j} w_ij (x_i - x̄)(x_j - x̄)] / [Σ_i (x_i - x̄)²], with w_ij = min(d) / d_ij,

where x_i is the signature score of each spot, x̄ is the mean score, n is the number of spots, w_ij is the weight of spots i and j, d_ij is the distance between spots i and j, and min(d) is the minimum distance over all pairs of spots.

Algorithm 1. Multi-model and multi-dataset cross-annotation
Require: N reference datasets (D); N groups of labels (L); N annotation models (M).
for i = 1 to N do
  for j = 1 to N do
    Test model M_j on dataset D_i and assign cell subtypes.
  end for
  Define: L_im, the mth subtype in L_i; L_jn, the nth subtype in L_j.
  Obtain: C_im = {cell | cells assigned to L_im}; C_jn = {cell | cells assigned to L_jn}.
  Jaccard similarity matrix: J_i(m, n) = |C_im ∩ C_jn| / |C_im ∪ C_jn|.
end for
Integrate the N matrices with their mean and variation: J = Mean(J_1, J_2, ..., J_N).
A positive GMI value indicates a tendency toward clustering, and the closer the GMI is to 1, the more significant the clustering. By default, samples exhibiting a GMI >0.5 are assumed to possess distinct spatial structures. Then we calculated the Local Moran's I (LMI) (Anselin 1995) to determine the position of the spatial structures:

LMI_i = ((x_i - x̄) / S²) Σ_{j≠i} w_ij (x_j - x̄), with S² = (1/n) Σ_j (x_j - x̄)².

A positive LMI value indicates that a spot's score has neighboring scores with similarly high or low attribute values. Since the LMI is a relative measure, we calculated z-scores, z_i = (LMI_i - E[LMI_i]) / SD(LMI_i), to interpret it and obtained the corresponding P-values. Furthermore, we used two strategies to overcome the high false positive rate of the LMI-based method. The first is P-value correction: FDR correction was implemented to adjust the P-values obtained from the z-scores, and a spot is determined to be part of a specific structure if (i) its adjusted P-value < .05 and (ii) its signature score exceeds the median. The second is based on z-scores: we recalculated the z-scores of the signature scores in areas with P-values < .05 and subsequently obtained new P-values; spots with new P-values < .05 were considered regions containing spatial structures.
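A direct numpy transcription of the global Moran's I defined above (with the distance-derived weights w_ij = min(d)/d_ij) might look as follows; the grid of spots and the Gaussian "hot spot" are toy inputs.

```python
import numpy as np

def morans_i(x, coords):
    """Global Moran's I of signature scores x at spot coordinates coords,
    with distance-based weights w_ij = min(d) / d_ij (as defined above)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-pairs
    w = d[np.isfinite(d)].min() / d       # w_ii = 0 since d_ii = inf
    z = x - x.mean()
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Toy usage: a hot spot in one corner of a 20x20 grid of spots
g = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2).astype(float)
score = np.exp(-((g - 2.0)**2).sum(1) / 20.0)
print(morans_i(score, g))                 # close to 1 => spatially clustered
```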
Analysis of tertiary lymphoid structures and tumor capsules
We collected the TLS-50 signature from Wu et al. (2021a) and calculated the average expression of these 50 signature genes in each spot as the TLS score. TLS areas were obtained using the spatial structure detection method described above, and we removed false positive results based on z-scores. Next, we combined the tumor samples from Wu et al. (2021a, b, 2022b) based on common genes and obtained a matrix containing 821 TLS spots. Then, we analyzed the cellular composition of the detected TLS spots with "FindTransferAnchors" and "TransferData" in Seurat (Hao et al. 2021). We chose a multi-modal PBMC dataset from Hao et al. (2021) as the reference and mapped the cell annotations onto the combined TLS spots. To save memory, we extracted 50% of the original dataset by stratified sampling of the cell subtypes.
To obtain the signatures of tumor capsules, we clustered the "HCC-1L" sample from Wu et al. (2021a) and used "FindMarkers" in Seurat to obtain the top 50 differentially expressed genes of the tumor capsule cluster (Capsule-50). Capsule scores were calculated as the average expression of Capsule-50. Tumor capsule areas were obtained using the same spatial structure detection method, and we removed false positive results based on the adjusted P-values.
Overview of scCancer2
We implemented three new modules in scCancer2 (Fig. 1). Firstly, scCancer2 automatically performs cell subtype classification with the built-in models and outputs multi-label annotations. To visualize the relationships between labels quantitatively, a similarity map of the cell subtypes from different atlases was calculated for every major cell type. Secondly, scCancer2 identifies malignant cells based on bulk RNA-seq and scRNA-seq data with XGBoost; our method can be directly applied to various types of samples without relying on the inference of CNVs. Thirdly, scCancer2 performs systematic analyses of spatial transcriptomics data for cancer research, mainly consisting of basic analysis and TME spatial analysis. The packages and methods newly integrated into scCancer2 are summarized in Supplementary Table S1.
scCancer2 upgrades TME cell type annotation to the subtype level
Advances in scRNA-seq have enabled researchers to provide accurate depictions of the TME. The cell annotations of recently published datasets are usually precise to the cell subtype level and can therefore serve as reference datasets for cell subtype classification. We collected and preprocessed a total of 15 published datasets across 17 human cancer types and three sequencing technologies (10X Genomics, Smart-seq2, InDrop), comprising 594 samples with 1 213 469 cells in total (Table 1).
Here, we implemented a rapid, supervised annotation framework integrating prior information. The training procedure for every single dataset mainly includes the following steps: firstly, select cells by the expression of marker genes to construct a high-quality training set (Pliner et al. 2019); secondly, select genes according to the entropy statistic (Li et al. 2020); thirdly, train a multinomial model for each cell subtype and estimate the expression probabilities of the selected genes by maximum likelihood estimation (Section 2.1). Then, for each major cell type, we trained a series of subtype annotation models based on multiple reference datasets (Section 2.1).
By applying it to the 594 samples in Table 1, we showed that scCancer2 has great generalization ability on TME datasets of five major cell types (Fig. 2A): T cells, B cells, myeloid cells, endothelial cells, and fibroblasts. Moreover, we compared the framework with other benchmarks, including scPred (Alquicira-Hernandez et al. 2019); the results in Fig. 2B further indicate the reliability of our framework. Taking the 43 817 immune cells in the colorectal cancer (CRC) dataset GSE146771 (Zhang et al. 2020) as an example, cell types were annotated hierarchically. We annotated the major immune cell types through OCLR in scCancer (Guo et al. 2021). Then, for each major cell type, we assigned multiple sets of cell subtype labels based on the multiple trained models in scCancer2 and visualized them respectively (Fig. 2C). The correspondence between the suffix numbers of the labels and the reference datasets can be found in Supplementary Table S2.
In summary, we fully preserved the original annotations from the different cell atlases and transferred them to new datasets with a supervised machine-learning framework.
scCancer2 quantitatively evaluates the similarity of cell subtypes across datasets
Through the above process, scCancer2 outputs multi-label annotations in which each cell is assigned multiple possible labels, retaining the original annotations of the cell atlases. However, the cellular composition of different cancer types varied greatly, and considerable variations exist in the expert annotations at the subtype level across different datasets. Therefore, depicting the relationship between cell subtypes defined in different studies is urgently needed. We carried out a method to integrate the cell annotations from different experts. We annotated cell subtypes across datasets with the built-in models, so that each label was assigned to a set of cells. Then, we extracted "label-barcode" sets from the results and derived the Jaccard similarity. Finally, the similarity matrix was visualized by heatmap, hierarchical clustering, and multi-dimensional scaling (Section 2.1). By observing the aggregation of labels in the 2D space, we can understand the complicated relationships between different cell subtypes.
(Notes to Table 1: (b) in the column "Cell type," "All" represents five major cell types: immune cells (T cells, B cells, and myeloid cells) and stromal cells (endothelial cells and fibroblasts); epithelial cells are excluded. (c) GSE156728 was treated as the query dataset; its patient samples were not used for model training.)
Taking the CRC dataset (Zhang et al. 2020) as an example, cell annotation was conducted based on the multiple trained models, and the cross-dataset label relationships of the B cells were calculated, as shown in Fig. 3A. The aggregation pattern of the B cell labels was consistent with expectations: two naive B cell labels, three plasma cell labels, and two memory B cell labels showed a high degree of similarity. In particular, the "Lymphoid-B" label in hepatocellular carcinoma (HCC) was expected to be the union of the naïve B cells and memory B cells in non-small-cell lung cancer (NSCLC) and breast cancer (BRCA). Its similarity with the naive B cells was 0.10 and 0.17, respectively, and that with the memory B cells was 0.72 and 0.67 (Fig. 3A). These results indicate an inclusion relationship between these groups of labels, consistent with the biological meaning of the labels. In the right panel of Fig. 3A, the subtypes of B cells can be clearly divided into three categories based on spatial distance. The "Lymphoid-B" label was located between the naive B category and the memory B category, and closer to the latter.
Similarly, the similarity maps of T cells and myeloid cells were generated; Fig. 3B and C show portions of the complete similarity graphs. Firstly, cell subtypes from different studies by the same group of researchers were often named in a similar pattern, and these almost always clustered into the same category in the similarity map, for example "CD4_C12-CTLA4," "C08_CD4-CTLA4," and "CD4_C9-CTLA4" in Fig. 3B. Beyond such nearly identical labels, cell subtypes named by different researchers but with similar functions also showed high similarity in the heatmap, for example "Mast," "Mast cells activated," "Mast_KIT," and "MAST-TPSAB1" in Fig. 3C. Finally, we integrated the original annotations of the pan-cancer query dataset (Zheng et al. 2021) into the similarity graph. As shown in Supplementary Fig. S1, the annotations of scCancer2 had a clear correspondence with the original expert annotations: almost all newly assigned labels that aggregated in the heatmap had corresponding original annotations. These results indicate that scCancer2 successfully depicted the cross-dataset relationships of complicated immune cell subtypes with similar biological functions (see Supplementary Figs S2-S6 for all results and the corresponding studies).
In summary, we quantitatively depicted the connections between cell subtypes from different cell atlases with the label similarity map. We extracted and summarized the abundant information in the multi-label annotation, which can be an important reference and literature indexing tool.
scCancer2 identifies malignant cells across multiple cancer types without internal references
Determining the malignancy of cells is an important and challenging task: it is key not only for the subsequent analysis of tumor heterogeneity and microenvironmental characteristics but also for investigating the mechanisms of tumor occurrence and development. In scCancer (Guo et al. 2021), cell malignancy scores were estimated based on inferCNV (Patel et al. 2014), and malignancy labels were determined from the bimodality of the scores after neighbor smoothing. However, CNVs are not a necessary feature of malignant cells. When there are no confident internal reference cells, the CNV-based method becomes unreliable. For example, when the proportions of tumor and normal cells in a sample are seriously imbalanced, the malignancy scores usually follow a unimodal distribution.
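As a rough illustration of this failure mode, the sketch below checks whether a vector of malignancy scores looks bimodal by fitting a two-component Gaussian mixture and applying an Ashman's-D-style separation criterion; the threshold and the criterion itself are assumptions for illustration, not the scCancer implementation.

```python
# Sketch: decide whether malignancy scores look bimodal (two separated
# Gaussian components) or unimodal, the failure case for CNV-based calls.
import numpy as np
from sklearn.mixture import GaussianMixture

def looks_bimodal(scores, min_separation=2.0):
    x = np.asarray(scores).reshape(-1, 1)
    gm = GaussianMixture(n_components=2, random_state=0).fit(x)
    mu = gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())
    # Ashman's D: distance between means in units of component spread
    d = abs(mu[0] - mu[1]) / np.sqrt((sd[0]**2 + sd[1]**2) / 2)
    return d > min_separation

rng = np.random.default_rng(0)
mixed = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
pure = rng.normal(0.8, 0.05, 1000)   # imbalanced/unimodal sample
print(looks_bimodal(mixed), looks_bimodal(pure))  # True False
```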
To improve the identification of malignant cells, we designed an XGBoost classifier integrating scRNA-seq data and the TCGA dataset, as a supplement to the CNV-based method. The pipeline includes reference dataset annotation based on the bimodal distribution of malignancy scores, feature selection based on pan-cancer malignant features extracted from the TCGA dataset, training of an XGBoost classifier, and performance evaluation (Section 2.2). We collected a total of 663,857 cells from 175 samples (Supplementary Table S4) and constructed a high-quality single-cell reference dataset with 466,468 cells and 2,756 genes. To ensure the reliability of the classifier, it is vital to balance the proportions of normal and malignant cells. Moreover, multiple cancer types were required in the reference and query sets for better pan-cancer generalization. Consequently, we divided the reference and query sets by sample source instead of by random sampling (Supplementary Table S4).
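The following minimal sketch mirrors the core steps of this pipeline on synthetic data: splitting reference and query by sample source rather than by random cells, training an XGBoost classifier, and reading off gene importances. The data, gene indices, and hyperparameters are placeholders, not those used by scCancer2.

```python
# Sketch of the malignant-cell classifier: split by sample source, train
# XGBoost on selected features, evaluate on held-out samples.
import numpy as np
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_cells, n_genes = 2000, 300
X = rng.poisson(1.0, (n_cells, n_genes)).astype(float)
y = rng.integers(0, 2, n_cells)            # 1 = malignant (toy labels)
X[y == 1, :20] += 1.5                      # enrich 20 "malignant" genes
sample_id = rng.integers(0, 20, n_cells)   # 20 patient samples

# Divide reference/query by sample source rather than random sampling
train = sample_id < 15
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X[train], y[train])
proba = clf.predict_proba(X[~train])[:, 1]
print("held-out AUC:", roc_auc_score(y[~train], proba))

# Feature importances indicate the genes driving the calls (cf. Fig. 4B)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("top gene indices:", top)
```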
Then, ElasticNet, XGBoost, and a neural network were each applied to identify malignant cells in scRNA-seq data. XGBoost achieved better performance than ElasticNet and the neural network (Fig. 4A and Supplementary Fig. S7). Moreover, XGBoost significantly improves computational efficiency and reduces memory usage compared with the CNV-based method, especially when processing integrated large-scale datasets (Peng et al. 2019, Zhang et al. 2020). By examining the significant genes selected by XGBoost during training, we judged the consistency between biological prior knowledge and the data-driven method. In particular, cancer-related genes including METTL1, ALKBH2, PSPH, PTPRC, and TRIP13 contributed the most to the identification, indicating good biological interpretability of our model (Fig. 4B).
To evaluate the robustness of scCancer2, we applied it to test samples in four distinct scenarios. Firstly, when the malignancy scores of tumor samples show a clearly bimodal distribution, the annotations of scCancer2 (XGBoost) are highly consistent with those of scCancer (the CNV-based method) (Guo et al. 2021) (Fig. 4C and Supplementary Fig. S8). Secondly, when the proportions of malignant and normal cells are imbalanced, as in normal samples and tumor organoid samples, scCancer2 can accurately identify normal cells (Fig. 4D and Supplementary Fig. S9) and malignant cells (Fig. 4E and Supplementary Fig. S9), whereas the CNV-based method faces limitations (Supplementary Fig. S10). For cancer organoid samples, these annotations provide valuable information for evaluating the quality of in vitro culture. Furthermore, the CNV-based method is unreliable for a minority of solid tumor samples: when the malignancy score follows a unimodal distribution, scCancer2 can still identify malignant cells (Fig. 4F). We found that the identified malignant clusters were highly consistent with the epithelial cells predicted by scCancer (Guo et al. 2021) in solid tumor samples (Supplementary Fig. S11). Lastly, scCancer2 can effectively annotate malignant cells in cancer types absent from the training set. In this experiment, we moved the lung cancer (LC) samples from the training set into the validation set, trained a new classifier, and directly predicted the malignant cells in the LC samples; a similar experiment was conducted on gastric cancer samples. scCancer2 still performed well on these samples (Fig. 4G and Supplementary Fig. S12).
In summary, the results indicate that scCancer2 effectively extracts the transcriptome characteristics of malignant cells in the TME and generalizes well across cancer types. It is a valuable supplement and improvement to the CNV-based method.
scCancer2 analyzes TME from a spatial perspective
Sequencing-based spatial transcriptomics has helped researchers achieve impressive results in exploring the spatial heterogeneity of the TME (Elhanani et al. 2023). scCancer2 provides a highly automated module for tumor ST analysis, which consists of two main parts. The first part covers quality control (QC), statistical analyses, and basic downstream analyses. The second part analyzes the TME from three different angles.
We first utilized spatial information to perform morphology QC, filtering small isolated tissue areas, which often do not contribute to the analysis. Then, we visualized unique molecular identifier (UMI) counts and detected gene numbers to reflect the characteristics of the tissue, and filtered spots with extremely low UMI counts or gene numbers. We provide statistical results of UMI counts on 60 tumor samples from nine cancer types (Supplementary Fig. S13 and Supplementary Table S6). Furthermore, we performed statistical analysis of gene expression proportions, especially for mitochondrial and ribosomal genes (Supplementary Fig. S14). After QC, we performed basic downstream analyses including dimension reduction, clustering, and differential expression analysis based on Seurat (Butler et al. 2018), gene expression program analysis based on non-negative matrix factorization (Lee and Seung 1999), and cell type and cell malignancy scoring based on marker genes (Supplementary Fig. S15).
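A minimal sketch of the two spatial QC steps described above, assuming spots with known grid coordinates: drop low-UMI spots, then remove small isolated tissue islands via connected components. All thresholds (min_umi, neighbor_radius, min_island) are illustrative, not scCancer2's defaults.

```python
# Sketch of spatial QC: filter low-UMI spots, then keep only spots in
# sufficiently large connected components of the spot neighborhood graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def spatial_qc(coords, umi, min_umi=500, neighbor_radius=1.5, min_island=20):
    keep = umi >= min_umi
    xy = coords[keep]
    # connect spots closer than neighbor_radius (grid units)
    pairs = cKDTree(xy).query_pairs(neighbor_radius, output_type="ndarray")
    n = len(xy)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), (n, n))
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    ok = sizes[labels] >= min_island      # drop small isolated islands
    out = np.zeros(len(coords), bool)
    out[np.flatnonzero(keep)] = ok
    return out

rng = np.random.default_rng(0)
coords = rng.integers(0, 60, (3000, 2)).astype(float)
umi = rng.integers(100, 5000, 3000)
print("spots kept:", spatial_qc(coords, umi).sum(), "of", len(coords))
```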
The TME spatial analysis module includes interaction identification, heterogeneity characterization, and spatial structure detection. The interaction between different regions is crucial for understanding tumor behaviors such as growth, progression, drug response, and therapeutic effect. For instance, analyzing the tumor boundary, which contains malignant and non-malignant spots, has contributed to identifying potential therapeutic targets (Xun et al. 2023). Therefore, scCancer2 automatically identifies the boundary spots between clusters and defines the ligand-receptor (L-R) interaction strength between two clusters as the mean of the average expression of the L-R pairs from the CellPhoneDB dataset (Efremova et al. 2020), as sketched below.
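The following minimal sketch shows one plausible reading of this definition: for each L-R pair, average the ligand's mean expression in one cluster with the receptor's mean expression in the other, then average over pairs. The pair list is a tiny illustrative subset of CellPhoneDB, and the exact aggregation in scCancer2 may differ.

```python
# Sketch of a ligand-receptor interaction score between two clusters.
import numpy as np
import pandas as pd

def lr_strength(expr, clusters, a, b, lr_pairs):
    mean_a = expr[clusters == a].mean()   # average expression in cluster a
    mean_b = expr[clusters == b].mean()   # average expression in cluster b
    scores = [(mean_a[l] + mean_b[r]) / 2
              for l, r in lr_pairs if l in expr and r in expr]
    return float(np.mean(scores))

genes = ["CD74", "MIF", "CXCL12", "ACKR3", "COL1A1", "ITGB1"]
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.gamma(2.0, 1.0, (200, len(genes))), columns=genes)
clusters = np.where(np.arange(200) < 100, "tumor", "capsule")

pairs = [("MIF", "CD74"), ("CXCL12", "ACKR3"), ("COL1A1", "ITGB1")]
print("tumor->capsule:", lr_strength(expr, clusters, "tumor", "capsule", pairs))
```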
As seen in Fig. 5A, scCancer2 marked the boundary between the fibrous capsule and the tumor. The L-R interaction results indicate that extracellular matrix-receptor interactions (collagen-related L-R pairs) were strong at the boundary. Besides, several disease-related L-R pairs, such as CD74-APP/COPA/MIF and CXCL12-ACKR3/DPP4, were also found at the boundary (García-Cuesta et al. 2019, Chen et al. 2021, Wu et al. 2021c).
As for spatial heterogeneity, in addition to the cell type and malignancy scoring mentioned above (Supplementary Fig. S15), we considered 14 phenotypes from CancerSEA (Yuan et al. 2019) to further explore tumor heterogeneity and the characteristics of the TME. Previous studies have demonstrated the association between the TME and these phenotypes, such as angiogenesis, hypoxia, and inflammation (De Palma et al. 2017, Jin and Jin 2020, Singleton et al. 2021, Denk and Greten 2022). Taking HCC-1L as an example, we found significant differences in phenotypic characteristics between tumor regions, capsules, and normal regions (Fig. 5B).
Spatial structures in the TME, such as tertiary lymphoid structures (TLSs) and capsules, are crucial to the growth and prognosis of tumors (Dieu-Nosjean et al. 2014, Zhu et al. 2019). We developed a method to automatically identify spatial structures (Section 2.3) and applied it to recognize TLSs and capsules. It achieved good performance on various cancer types, including HCC, BRCA, and CRC (Fig. 5C and Supplementary Fig. S16). Furthermore, we found a significant correlation between spatial structures and transcriptome characteristics: the detected distribution of TLSs was correlated with the distribution of inflammatory features in 60 tumor samples across nine cancer types (Fig. 5D).
Finally, we extended the TME cell subtype annotation module to the spatial level by quantitatively evaluating the cellular composition of TLSs. We detected TLSs in tumor samples (BRCA, CRC, and HCC) (Wu et al. 2021a, b, 2022b) and transferred the cell subtype annotations from the reference dataset (Hao et al. 2021) to all the detected TLS spots (Section 2.3). As shown in the first panel of Fig. 5E, we detected four immune cell types highly enriched in TLS spots, namely B cells, dendritic cells, monocytes, and T cells, consistent with previous studies (Sautès-Fridman et al. 2019, Kang et al. 2021). At the cell subtype level, as shown in the second panel of Fig. 5E, plasmablasts, memory B cells, and dendritic cells are the most abundant in TLSs. Common subtypes of CD4 T cells (TCM and TEM) can also be detected.
In summary, we implemented a new module to analyze the TME based on spatial transcriptomics data. This module is highly systematic and automated. It enables us to explore the spatial characteristics of the TME comprehensively from multiple perspectives.
Discussion
By leveraging machine learning and integrating multiple analysis modules, we provide scCancer2, a valuable tool for researchers to gain a more comprehensive understanding of the TME at the single-cell level. Our analysis consists of three modules. Firstly, we trained a series of machine-learning models on scRNA-seq data for cell subtype annotation and quantitatively evaluated the similarity of labels originating from different datasets. Secondly, we constructed a reference dataset based on bulk RNA-seq and scRNA-seq data and identified malignant cells with an XGBoost classifier. Thirdly, we integrated an ST analysis module for a multi-dimensional view of the TME.
For the cell subtype annotation module, we tested classic machine learning methods (e.g. SVM and XGBoost) and specialized algorithms for scRNA-seq on massive data. Evaluating classification algorithms requires consideration of multiple factors: classification performance, computational efficiency, model complexity, and biological interpretability. In terms of accuracy within a single dataset, scCancer2 is close to the SVM in scPred (Alquicira-Hernandez et al. 2019). However, scCancer2 can generate lightweight models and rapidly assign multiple sets of labels to new input data. Meanwhile, we found that XGBoost achieved better performance than other algorithms on classification tasks (Supplementary Fig. S17), but its generalization was worse than scCancer2 on cross-dataset annotation, making it hard to discover the relationships of labels originating from different datasets (Supplementary Figs S18 and S19).
The cross-dataset annotation results indicate the generalization ability of scCancer2 on immune cells and endothelial cells: cell subtypes with similar functions showed high similarity in the heatmap (Supplementary Figs S2-S5). The results for fibroblasts are relatively ambiguous (Supplementary Fig. S6) due to the heterogeneity of fibroblasts (Kanzaki and Pietras 2020, Lavie et al. 2022). scCancer2 provides convenient access for users not only to extract specific cell subtypes from a dataset but also to index similar subtypes and relevant references. We hope to propose reliable indicators for quantitatively evaluating the cross-dataset performance of algorithms based on similarity maps.
For the malignant cell identification module, the machine learning-based methods in scCancer2 are a valuable supplement to the CNV-based method. Firstly, scCancer2 extracts the transcriptome characteristics of malignant cells and performs well when the CNV-based method fails. Secondly, when directly processing large-scale published datasets instead of individual samples, the time and memory costs of the CNV-based method are high, whereas the trained model in scCancer2 can rapidly identify malignant cells in large-scale datasets.
For the ST analysis module, exploring tumor heterogeneity from a spatial perspective is highly significant. As we have already analyzed the composition of TLSs at the cell subtype level, we hope to benchmark more deconvolution and mapping strategies for the integration of scRNA-seq and spatial transcriptomics data (Longo et al. 2021). In addition, transcriptome characteristics extracted from bulk RNA-seq and scRNA-seq data, such as malignancy features, can also be utilized for spatial heterogeneity analysis.
In the future, we will pay more attention to the scalability of scCancer2. Recent studies have reported several newly discovered or rare cell subtypes (Hanada et al. 2022, Nalio Ramos et al. 2022), and detecting them in existing samples is an important issue. We can treat these cell subtypes as positively labeled queries and address this problem with novelty detection methods such as one-class models (Perera et al. 2021). Moreover, rapidly adding new subtypes to existing label networks (without cross-dataset annotation) is challenging. Co-modeling the expression matrix and the label similarity matrix as a label network with graph models can not only visualize the relationships more clearly, but also accommodate new nodes through graph embedding and link prediction.
Figure 2. scCancer2 annotates cell subtypes of the TME. (A) Performance evaluation of scCancer2 on the cell subtype annotation task. The accuracy and kappa index were obtained by 5-fold cross-validation (stratified sampling by cell subtype). (B) Comparison of different machine learning methods on the cell subtype annotation task. The results were obtained by 5-fold cross-validation. (C) Multi-label annotation results on the CRC example (immune cells). Rough cell labels (T cell, myeloid cell, B cell) were assigned to the CRC example through OCLR in scCancer. The dataset was then divided by major cell types. Cell subtype annotation was achieved by scCancer2, forming a multi-model hierarchical cell type annotation. In each subfigure, the numbers after the label represent the literature serial number corresponding to the major cell type.
Figure 3. Generation of cross-dataset cell subtype similarity by scCancer2. (A) The cross-dataset similarity map of lymphoid B cell subtypes on the CRC example. The left panel is the heatmap of the similarity matrix. The right panel is the multi-dimensional scaling plot of the matrix, demonstrating the similarity of different labels. Color represents the source of subtype labels. (B) The cross-dataset similarity map of lymphoid T cell subtypes on the CRC example. (C) The cross-dataset similarity map of myeloid cell subtypes on the CRC example. *In each subfigure, the numbers after the label represent the literature serial number corresponding to the major cell lineage. The horizontal and vertical coordinates of the heatmap are the same. Each point in the matrix represents the Jaccard similarity between the horizontal axis label and the vertical axis label, and the diagonal elements are 1. (B) and (C) are parts of the similarity map (see Supplementary Figs S2-S6 and Supplementary Table S2 for the full version and references).
Figure 4. Results of malignant cell identification by scCancer2. (A) Performance comparison of different machine learning methods across multiple cancer types on the malignant cell identification task. (B) Important genes for cell malignancy. The histogram shows the feature importance ranking calculated by the XGBoost model during training. (C) Case 1: test scCancer2 on tumor samples with a bimodal distribution of malignancy scores. For each subfigure (cancer type), the left panel shows the annotation results of scCancer2 (XGBoost), while the right panel shows the annotation results of scCancer (CNV-based method). (D) Case 2: test scCancer2 on normal samples. The UMAP plot shows the annotation results of scCancer2 on three samples without malignant cells. (E) Case 3: test scCancer2 on organoid samples. The UMAP plot shows the annotation results of scCancer2 on three in vitro cholangiocarcinoma organoid samples. (F) Case 4: test scCancer2 on tumor samples with a unimodal distribution of malignancy scores. For each subfigure, the left panel shows the distribution of the malignancy score, while the right panel shows the annotation results of scCancer2. (G) Case 5: test scCancer2 on new cancer types. The subfigures compare the prediction results and ground truth on three lung cancer samples. See Supplementary Table S4 for the correspondence between the original names of the datasets and their naming in the figures.
Figure 5. Results of TME spatial analysis by scCancer2 on tumor samples. (A) Boundary spots of neighboring clusters (left panel) and the ligand-receptor interaction strength between two clusters (right panel) in HCC-1L. (B) Boxplot showing the scores of 14 phenotypic characteristics in tumor regions, capsules, and normal regions in HCC-1L. The scores of each characteristic were scaled between 0 and 1. The P-value between two groups was calculated with the Wilcoxon test. *P < .05, **P < .01, ***P < .001, and ****P < .0001. EMT, epithelial-mesenchymal transition. (C) Tertiary lymphoid structures identified by scCancer2 and their corresponding H&E images. HCC: "HCC-1L" in Wu et al. (2021a); BRCA: "V1_Breast_Cancer_Block_A_Section_2" in 10x Genomics; CRC: "ST-colon3" in Wu et al. (2022b). (D) Correlation analysis between spatial structure (TLSs) and spatial characteristic (inflammation) on samples across various cancer types in Supplementary Table S6. (E) Major cell lineages and cell subtypes enriched in TLSs. The boxplot contains all major and minor cell types predicted to be significantly positive in the TLS spots. The cell type labels originate from Hao et al. (2021).
"Computer Science",
"Medicine",
"Biology"
] |
Uncovering population structure in the Humboldt penguin (Spheniscus humboldti) along the Pacific coast of South America
The upwelling hypothesis has been proposed to explain the reduced or absent population structure of seabird species that specialize in food resources available at cold-water upwellings. However, population genetic structure may be challenging to detect in species with large population sizes, since allele frequencies change more slowly under genetic drift. High gene flow among populations, whether constant or occurring as pulses of migration over short periods, may also decrease the power of algorithms to detect genetic structure. Penguin species usually have large population sizes and high migratory ability but philopatric behavior, and recent investigations debate the existence of subtle population structure, not detected before, for some species. A previous study on Humboldt penguins found a lack of population genetic structure between the colonies of Punta San Juan and southern Chile. Here, we used mtDNA and nuclear markers (10 microsatellites and the RAG1 intron) to evaluate population structure across 11 main breeding colonies of Humboldt penguins, covering the whole spatial distribution of the species. Although mtDNA failed to detect population structure, the microsatellite loci and the nuclear intron detected population structure along the species' latitudinal distribution. Microsatellites showed significant RST values for most pairwise comparisons of locations (44 of 56 comparisons, RST = 0.003 to 0.081), and 86% of individuals were assigned to their sampled colony, suggesting philopatry. STRUCTURE detected three main genetic clusters corresponding to geographic regions: i) Peru; ii) northern Chile; and iii) central-southern Chile. The Humboldt penguin shows a signal of population expansion after the Last Glacial Maximum (LGM), suggesting that the genetic structure of the species results from population dynamics and from foraging at cold-water upwellings, which shape gene flow and philopatry. Our findings thus highlight that variable markers and wide sampling across the species' distribution are crucial to better understand genetic population structure in animals with high dispersal ability.
Introduction
In species with high dispersal ability and no geographical barriers across their distribution, low genetic population structure is expected. For instance, weak or no population genetic structure has been frequently recorded for seabird species along the Atlantic coast of South America (e.g. Kelp gull, Larus dominicanus [1,2]; Magellanic penguin, Spheniscus magellanicus [3]; South American tern, Sterna hirundinacea [4]), along the Pacific coast of South America (e.g. Peruvian pelican, Pelecanus thagus [5]), and around Antarctica (Emperor penguin, Aptenodytes forsteri [6,7]; Adélie penguin, Pygoscelis adeliae [8]; Chinstrap penguin, P. antarcticus [9]). Therefore, the relative importance of the factors that influence the population structure of seabirds has been under debate, such as the presence of physical or non-physical barriers [10,11], the foraging ecology of the species [12], and/or their philopatric behavior [13].
The upwelling hypothesis has been proposed to explain the reduced or absent population genetic structure of seabird species specialized in food resources available at cold-water upwellings, which are regularly influenced by large-scale climatic events [12]. For instance, the Southeast Pacific coast is characterized by the Humboldt Current System (HCS), with cool sea surface temperatures and high biological productivity. In the HCS, coastal upwellings provide more than 10% of the global fish catch [14]; however, these areas are neither temporally continuous nor spatially uniform: indeed, the El Niño Southern Oscillation (ENSO) reduces upwelling intensity, warming surface waters, reducing productivity [15,16,17,18], and affecting the survival and dispersal of fishes, seabirds and marine mammals [19,20,21]. Thus, during El Niño events, adult seabirds disperse long distances to find new productive upwelling areas to forage and colonize new areas, according to the availability of breeding grounds, increasing gene flow and consequently reducing population genetic structure. The weak population genetic structure and high genetic diversity of seabirds endemic to the Humboldt Current can be explained by the upwelling hypothesis [5], as described for Peruvian pelicans, Peruvian boobies [12], and also Humboldt penguins [22].
Another hypothesis to explain reduced population genetic structure is the effect of Pleistocene glaciations, frequently invoked for seabirds from South America [23] and from the Northern Hemisphere [24,25]. During the Last Glacial Maximum (LGM), the southern portion of the Pacific coast was covered by an ice sheet [26,27]. In Chile, however, the region between 33˚S and 46˚S is considered to have been climatically stable [28]. Thus, distinct climate conditions along the Pacific coast could play an important role in shaping the demographic history of populations, affecting species distributions and leaving a genetic signature on populations. Therefore, low genetic structure could reflect the signature of population expansion after the LGM combined with large effective population size (Ne), such that recent gene flow would not be easily detected.
Gene flow among populations derives from both contemporary and historical factors. However, detecting population genetic structure in species with large effective population sizes is challenging, since changes in allele frequencies due to genetic drift are inversely proportional to population size [29]. There is also debate on the power of clustering algorithms to detect genetic structure in species with large populations [6,30,31,32]. For instance, Chinstrap penguins showed weak genetic population structure and a pattern of isolation by distance (IBD) when four breeding colonies at the southernmost Western Antarctic Peninsula (WAP) were evaluated [33], but no differentiation between the distant Bouvet Island and 11 WAP locations [9]. The type of molecular markers, the number of loci, the sample size, and the number of locations across the species distribution may all be important to fully understand these patterns. Therefore, the detection of population genetic structure in seabirds from the HCS may not only reject the upwelling hypothesis but also motivate new hypotheses to explain the patterns of species across the region.
Penguins are monogamous seabirds with intense biparental care, philopatric behavior, high dispersal capacity, and a specialist diet [34]. The Humboldt penguin (Spheniscus humboldti) is an HCS specialist, widely distributed along the Pacific coast of South America from La Foca Island (05˚12'S; 81˚12'W) in Peru to Metalqui Island (42˚12'S; 74˚09'W) in Chile [35]. Humboldt penguins have been recorded at Guafo Island (43˚36'S) in a mixed-species colony, but there is no report of breeding activity there [36]. At the southern limit of the distribution, there is information about hybridization between Humboldt and Magellanic penguins in mixed-species colonies [37]. The current global population size is estimated at ca. 32,000 to 36,982 breeding adults, and the Humboldt penguin is listed as Vulnerable by the International Union for Conservation of Nature (IUCN) due to population reductions attributed to exploitation or habitat alteration, as well as the effects of ENSO events [38,39].
Migration rates among colonies are not well known; however, there is evidence of individuals from the Pan de Azucar colony migrating over 600 km, and birds from Puñihuil have been found over 1,000 km north of their original colonies [40,41]. In addition, it has been proposed that during ENSO events the abundance and distribution of Humboldt penguins may shift southward, causing population reductions in the north (e.g. Punta San Juan, Peru) and increases in some colonies in Chile, such as Chañaral Island [39]. Intense migration has been corroborated by the low genetic structure estimated among some colonies of Humboldt penguins [22].
The present study aims to evaluate the population genetic structure of the Humboldt penguin, testing the upwelling and glaciation hypotheses proposed to explain the lack of, or reduced, population structure previously reported for this species, and to investigate how the choice of molecular markers affects inference of the population genetic structure of the species along the HCS. To achieve these goals, we (1) characterized the genetic diversity and geographical structure across the entire species range, (2) determined the effects of historical climate changes on the species' demography, and (3) evaluated whether there is sex-biased philopatry and dispersal, shedding light on the questions about the low structure recorded in several seabird species.
Ethics statement
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Animals in Research of the National Council of Animal Experiments of Brazil and the Bioethics Guideline of CONICYT (Comisión Nacional de Investigación Científica y Tecnológica) de Chile.
Sampling
Blood samples for genetic analysis were collected during penguin breeding seasons between 2005 and 2013. We collected a total of 487 samples from adult Humboldt penguins from 11 breeding colonies distributed along the entire distribution range in Peru and Chile (Fig 1). Penguins were captured quietly with a 1.5-m noose pole used to lead them out of their nests, and were then held manually. The heads of captured penguins were covered with a mask to reduce visual stress. Approximately 500 μl of blood was obtained from the internal metatarsal vein or the brachial vein using a 23G needle and a 3 mL syringe, and stored in 96% ethanol P.A. for genetic analysis. To avoid re-sampling, penguins were marked temporarily with water-resistant color markers, except in the Punta San Juan colony, where penguins already had flipper bands. DNA was isolated using standard phenol-chloroform extraction protocols followed by ethanol precipitation and resuspension in sterile water [42].
Molecular sexing
Sex was determined by polymerase chain reaction (PCR) using primer pair P2 and P8 [43], in a 10 μL volume containing 50 ng of total DNA, 1X Taq buffer, 200 μM of each dNTP, 3.5 mM MgCl2, 0.5 μM of each primer, and 0.5 U of Taq DNA polymerase. Amplifications were performed with an initial step of 94˚C for 4 min, 53-55˚C for 30 s, and 72˚C for 1 min, followed by 40 cycles of 30 s at 92˚C, 30 s at 50˚C, and 45 s at 72˚C, and a final extension of 7 min at 72˚C. The PCR amplifies regions of the CHD1 gene found on the sex chromosomes [43]. Sex identification was based on the number of bands visualized on a 3% agarose gel for a given sample: males show a single band, corresponding to the intron on the Z chromosomes, whereas females show two bands of distinct sizes, corresponding to the introns on the Z and W chromosomes.
Microsatellite genotyping
We genotyped 13 microsatellite loci developed for Spheniscus [22,44,45]. PCR amplification and genotyping were performed using fluorescently labeled primers (Thermo Fisher, São Paulo, Brazil). PCRs were conducted separately for each locus in a 10 μL volume containing 20-40 ng of total DNA, 1X Taq buffer, 200 μM of each dNTP, 0.5 μM of each primer, and 0.5 U of Taq DNA polymerase. Amplifications were performed with an initial step at 95˚C for 4 min and 37 cycles of 30 s at 94˚C, 30 s at the annealing temperature of each locus (S1 Table), and 1 min at 72˚C, followed by a final extension of 10 min at 72˚C. PCR products were diluted 1:5. A volume of 1.5 μL of diluted FAM-labeled PCR product and 3.0 μL of HEX-labeled product was then suspended in 7.75 μL HiDi Formamide (Applied Biosystems, Foster City, CA) with 0.25 μL of GeneScan 500 ROX size standard and analyzed on an ABI 3500 (Applied Biosystems, Foster City, CA). Fragments were genotyped using GeneMapper v.4 software (Applied Biosystems, Foster City, CA).
Genotyping errors and null alleles were tested for each colony using the program Micro-Checker [46]. Hardy-Weinberg and linkage equilibrium were evaluated in GenAlEx [47] for each locus within each colony, as well as for all colonies and all loci together. Allele frequencies, allelic richness, and observed and expected heterozygosity were estimated for each colony using GenAlEx. Differences in the distribution of genetic variation among colonies were analyzed through pairwise RST values in ARLEQUIN v.3.1 [48]. Correlations between the indices of genetic differentiation and the geographic distances between all pairs of colonies were tested using the Mantel test, performed with the Mantel Non-parametric Test Calculator v.2.0 [49] with 10,000 randomizations. The distances between pairwise colonies (islands) were calculated from geographic coordinates with the program GPS coordinate (https://gps-coordinates.org/distance-between-coordinates.php), considering Euclidean distance.
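A minimal sketch of the Mantel procedure, assuming symmetric genetic and geographic distance matrices; the matrices below are toy data, and the cited calculator may differ in details such as the permutation scheme.

```python
# Sketch of a Mantel test: permute one distance matrix and compare the
# observed correlation between genetic (e.g. pairwise RST) and geographic
# distances against the permutation null.
import numpy as np

def mantel(d1, d2, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)        # upper triangle only
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d1))
        r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.random((8, 2))                        # 8 colonies, toy coordinates
geo = np.linalg.norm(x[:, None] - x[None], axis=-1)
gen = rng.random((8, 8)); gen = (gen + gen.T) / 2; np.fill_diagonal(gen, 0)
r, p = mantel(gen, geo)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```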
Bayesian clustering analysis was performed to estimate the optimal number of genetic clusters (K) using STRUCTURE v2.3.3 [50]. To test for population genetic structure without prior knowledge of sampling locations, we estimated the posterior probability of the data fitting the hypothesis of K clusters [Pr(X|K)], where K is the number of putative populations. STRUCTURE was run using the admixture and no-admixture models, without localities as priors and assuming uncorrelated allele frequencies. Preliminary runs testing K = 1 to K = 10 were repeated 10 times; each run had 100,000 burn-in cycles and 10,000 MCMC cycles. STRUCTURE HARVESTER was used to infer the simplest model of genetic population structure by applying Evanno's method (ΔK) [51]. In addition, a Discriminant Analysis of Principal Components (DAPC) was carried out to determine the number of clusters of genetically related individuals with a non-Bayesian approach. DAPC uses sequential K-means and model selection to identify genetic clusters [52]. For this, the adegenet package in R [53] was used, retaining all principal components.
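For reference, Evanno's ΔK can be computed from the replicate log-likelihoods reported by STRUCTURE as |L''(K)|/sd(L(K)); the sketch below assumes contiguous values of K and uses made-up log-likelihoods.

```python
# Sketch of Evanno's delta-K, as computed by STRUCTURE HARVESTER.
import numpy as np

def evanno_delta_k(lnp):            # lnp: dict K -> list of replicate lnP(X|K)
    ks = sorted(lnp)
    mean = {k: np.mean(lnp[k]) for k in ks}
    sd = {k: np.std(lnp[k], ddof=1) for k in ks}
    out = {}
    for k in ks[1:-1]:              # delta-K is undefined at the endpoints
        l2 = abs(mean[k + 1] - 2 * mean[k] + mean[k - 1])
        out[k] = l2 / sd[k]
    return out

lnp = {1: [-5200, -5210], 2: [-4900, -4890], 3: [-4700, -4705],
       4: [-4680, -4690], 5: [-4675, -4685]}
dk = evanno_delta_k(lnp)
print(dk, "-> best K:", max(dk, key=dk.get))
```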
The assignment of each individual to its putative population of reference was evaluated for the microsatellite data using GeneClass [54], employing the likelihood method based on allele frequencies [55] as well as a Bayesian approach [56]. The probability that each individual was assigned to a candidate population was estimated using a Monte Carlo resampling method [57] (number of simulated individuals = 10,000; type I error = 0.01; rejection threshold of 0.05). The same program was used to detect first-generation migrants employing the Bayesian criterion [56] with the Monte Carlo resampling method [57], 10,000 simulated individuals, and an alpha of 0.01. We used the ratio of the likelihood in the home population to the highest likelihood value among all available populations (L = L_home/L_max). Differences in philopatry and migration rates between sexes were tested with the Mann-Whitney non-parametric test using the BioEstat software [58]. Gene flow between populations was also estimated through the coalescence-based maximum likelihood method implemented in MIGRATE-n 3.2.6 [59], considering geographical distance. MIGRATE assumes an n-island model at mutation-migration-drift equilibrium with values of M and θ constant over time. The Brownian motion model was used as an approximation of the stepwise mutation model, and, following 10 initial trials, the search criteria for the MCMC sampler were set to 20 short chains of 20,000 steps and 3 long chains of 200,000 steps.
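A minimal sketch of the frequency-based assignment logic: the log-likelihood of a multilocus genotype in each candidate population is computed from that population's allele frequencies under HWE, and the individual is assigned where the likelihood is highest, with L_home/L_max flagging candidate migrants. The populations, frequencies, and genotype below are illustrative, and GeneClass applies corrections not reproduced here.

```python
# Sketch of likelihood-based population assignment from allele frequencies.
import numpy as np

def log_lik(genotype, freqs):
    ll = 0.0
    for (a1, a2), f in zip(genotype, freqs):
        p, q = f.get(a1, 1e-4), f.get(a2, 1e-4)   # small floor for absent alleles
        ll += np.log(p * q * (2 if a1 != a2 else 1))  # HWE genotype probability
    return ll

pops = {
    "Punta San Juan": [{"A": 0.7, "B": 0.3}, {"C": 0.9, "D": 0.1}],
    "Pan de Azucar":  [{"A": 0.2, "B": 0.8}, {"C": 0.4, "D": 0.6}],
}
ind = [("A", "A"), ("C", "C")]                    # two-locus genotype
ll = {p: log_lik(ind, f) for p, f in pops.items()}
best = max(ll, key=ll.get)
print(ll, "-> assigned to:", best)
```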
Deviations from mutation-drift equilibrium were tested with the program BOTTLENECK [60,61]. Three models of microsatellite evolution were tested: the infinite allele model (IAM), the two-phase model (TPM), and the stepwise mutation model (SMM). The TPM is the most realistic model for microsatellite mutation because it assumes mainly stepwise mutations with some multi-step mutations [62]. Parameters for the TPM were set at 90% single-step mutations, as suggested for microsatellite data [60,61].
Mitochondrial DNA and nuclear DNA sequences
The mtDNA control region was amplified with primers D-loop C and D [63]. The PCR reaction (10 μL) contained 20 ng of DNA, 1X Taq buffer, 200 μM of each dNTP, 1.0 μM of each primer, and 0.5 U of Taq polymerase. Amplifications were performed with an initial step at 94˚C for 2 min and 35 cycles of 30 s at 94˚C, 40 s at 62˚C and 90 s at 72˚C, followed by a final extension of 10 min at 72˚C. PCR products were cleaned by precipitation using 20% polyethylene glycol with 2.5 M NaCl. Sequences were obtained on an ABI 3500.
RAG1 (recombination activating gene 1) was amplified with primers RAG17 and RAG22 [64]. The PCR reaction was prepared in 10 μL as above (D-loop). Amplifications were performed with an initial step at 94˚C for 2 min and 35 cycles of 30 s at 94˚C, 40 s at 59˚C and 90 s at 72˚C, followed by a final extension of 10 min at 72˚C. PCR products were cleaned and sequenced as above.
Sequences were visualized using ChromasLite 2.1 (www.technelysium.com.au). Alignments were adjusted manually in BioEdit v.5.06 [65]. A Bayesian approach run in the program PHASE [66] was used to identify the haplotypes of heterozygotes in the nuclear intron; this program reconstructs haplotypes as implemented in the DnaSP v.5.10.01 software [67].
Descriptive analyses, including haplotype diversity (h), nucleotide diversity (π), and theta (θ), were performed in DnaSP v.5.10.01 [67]. We used the Network software version 4.6 (www.fluxus-technology.com) with the median-joining method [68] to draw the relationships among haplotypes. Additionally, we calculated Tajima's D [69] and Fu's Fs [70] statistics in DnaSP v.5.10.01 [67] to test for deviations from neutrality. We selected these statistical tests for their power to detect population expansion scenarios under specific sampling conditions and specified population expansion rates, times since expansion, sample sizes, and numbers of segregating sites [71].
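For readers unfamiliar with the statistic, the sketch below computes Tajima's D directly from an alignment, following the constants in Tajima (1989); the sequences are toy data, and DnaSP handles details such as missing data that are omitted here.

```python
# Sketch of Tajima's D: compare pairwise nucleotide diversity (pi) with
# Watterson's estimator from the number of segregating sites S; strongly
# negative D suggests population expansion.
from itertools import combinations

import numpy as np

def tajimas_d(seqs):
    n, L = len(seqs), len(seqs[0])
    S = sum(len({s[j] for s in seqs}) > 1 for j in range(L))
    if S == 0:
        return 0.0
    pi = np.mean([sum(a != b for a, b in zip(s1, s2))
                  for s1, s2 in combinations(seqs, 2)])
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

seqs = ["ACGTACGTAC", "ACGTACGTAA", "ACGAACGTAC", "ACGTACCTAC"]
print(f"Tajima's D = {tajimas_d(seqs):.3f}")
```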
We used the AIC as implemented in the software jModelTest [72] to select the best-fit evolutionary model. The model selected for the control region was T92+G, with a discrete gamma distribution (α = 0.06), and JC for the RAG1 region. These models were used in Bayesian skyline plots to analyze population size dynamics through time [73], implemented in BEAST 1.6.1 [74]. We performed runs of 200,000,000 steps, logged every 5,000 steps, with a burn-in of 20,000,000 steps. For the BEAST analysis, we used the mutation rate of 0.86 substitutions/site/Myr described for the Adélie penguin D-loop (HVRI) [75] and 1.9 × 10−3 substitutions/site/Myr for RAG1 [76]. To evaluate the convergence of parameters between runs and the performance of the analysis (ESS values > 200), we used TRACER 1.7.5 (http://beast.bio.ed.ac.uk/Tracer) [74]. To assess the level of population genetic structure among localities, we performed an analysis of molecular variance (AMOVA) with two hierarchical levels in ARLEQUIN 3.5 [46].
Genetic diversity
The Humboldt penguin showed high genetic diversity for all markers in all colonies. For microsatellites, the number of alleles per locus ranged from eight (locus Sh2Ca58) to 23 (locus Sh2Ca40), averaging 15.89 over all loci (S1 Table). Private and rare alleles were found in almost all breeding colonies (frequencies of 0.004 to 0.056), except Pupuya and Isla Grande de Atacama.
The analyses performed in Micro-Checker showed evidence of null alleles at locus M1-11 in seven breeding colonies (Algarrobo, Cachagua, Tilgo, Pajaros, Choros, Chañaral and Punta San Juan), where it was out of Hardy-Weinberg equilibrium (HWE); it was therefore excluded from the population analyses. Loci Sh2Ca58, Sh2Ca12 and Sh2Ca9 were also excluded because their genotype proportions deviated from HWE expectations (S2 Table). Thus, the population analyses were conducted with 10 microsatellite loci. Humboldt penguin breeding colonies along the Pacific coast of South America showed relatively high levels of genetic diversity, with a mean heterozygosity of 0.72 ± 0.014 (Table 1). The minimum observed value was from Algarrobo Island (Ho = 0.66) and the maximum from Tilgo (Ho = 0.76).
Analysis of the 401-bp fragment of the D-loop HVRI mtDNA from 175 Humboldt penguins revealed 37 haplotypes, with high haplotype diversity (Hd = 0.906) and high nucleotide diversity (π = 0.008). An 876-bp RAG1 fragment from 54 individuals showed 23 haplotypes, also with high haplotype diversity (Hd = 0.876) and nucleotide diversity (π = 0.002). These patterns of high genetic diversity were observed in all colonies analyzed (Table 1).
Population genetic structure
In the AMOVA, genetic variability was found mainly within rather than among populations. For the microsatellite loci, 96.89% of genetic variability was detected within populations and only 3.11% among populations (p < 0.001), compared with 98.36% within and 1.64% among populations for mtDNA (p = 0.10), and 78.51% within and 21.49% among populations for RAG1 (p < 0.001).
Bayesian structure analysis of the microsatellite data suggested that the global population comprises three groups (K = 3) of Humboldt penguins: 1) Punta San Juan, Peru; 2) Pan de Azucar and Isla Grande de Atacama; and 3) the remaining locations (Fig 2, S4 Fig and S8 Table). Despite the large geographical distance, Pupuya, on the central coast, grouped with the locations from the north (i.e. Pan de Azucar and Isla Grande), probably due to the low sample size from this location. In addition, DAPC estimated the optimal number of clusters as K = 2, one cluster including Pan de Azucar and Isla Grande de Atacama and the other including all remaining locations, although within the second group Punta San Juan showed slight differentiation (Fig 3). No isolation by distance was identified among locations, with no correlation between geographic and genetic distance (Mantel test; r = 0.04, t = 0.17, p = 0.57). Population genetic structure with significant RST values for microsatellites was observed between the majority of pairwise locations, mainly between the groups detected by STRUCTURE: significant RST values between Punta San Juan and the remaining locations ranged from 0.011 to 0.088, and between Pan de Azucar/Isla Grande and the remaining locations from 0.001 to 0.158; within the third group, higher values were found between Algarrobo and the remaining locations (0.059 to 0.158), except Pupuya and Chiloé (Fig 4, S3 Table).
RAG1 corroborated this population structure, with significant ϕST values for 15 of 36 pairwise comparisons, mainly between Punta San Juan and Pan de Azucar, Chañaral, Choros, Cachagua and Algarrobo, and between Pan de Azucar and Chañaral, Choros, Cachagua and Algarrobo (S4 Table). Cachagua is significantly distinct from almost all colonies, except Tilgo. The D-loop region, on the other hand, was uninformative, since only one pairwise comparison showed a significant value (S4 Table). In addition, the assignment test indicated that philopatry in the Humboldt penguin is higher than 86% (Table 2).
Historical versus recent dispersal
Inference of recent migration indicated low, asymmetric, bidirectional gene flow among Humboldt penguin colonies (Table 2). In contrast, historical gene flow was observed among all colonies (Table 3). In total, 368 Humboldt penguins were sexed: 190 males and 178 females (S5 Table), with no significant deviation from the expected 1:1 sex ratio. However, females showed lower philopatry than males (S6 Table). Furthermore, there is evidence of recent expansion of the Humboldt penguin at Punta San Juan (Fs = -5.87, p = 0.02) and Pan de Azucar (Fs = -9.31, p = 0.01; D = 1.63, p = 0.03), detected only with the D-loop region (Table 1). Corroborating this expansion, the network has a star-shaped topology, with few haplotypes at high frequency and several at low frequency separated by few mutations (Fig 5). Skyline plots showed coalescence around 145,000 years ago for RAG1 and 25,000 years ago for the D-loop region (S2 Fig).
Discussion
Our study reveals that the Humboldt penguin exhibits a clear population genetic structure along the Pacific coast of South America, extending the subtle structure observed in previous studies on Humboldt penguins [22]. However, our results did not corroborate the isolation-by-distance pattern [22], probably due to gaps in sampling. The present study used several markers, a larger sample size, breeding colonies distributed throughout the whole range, and newer methods of data analysis, such as the Bayesian methods applied here. The combination of all these appears to have overcome the effects of large population size and of pulses of migration related to climatic oscillations (e.g. ENSO) in the Humboldt penguin, which frequently limit the power to detect population genetic structure. Bayesian genetic structure analyses revealed three genetic clusters in the Humboldt penguin: 1) Peru (Punta San Juan); 2) northern Chile (the region of Pan de Azucar and Isla Grande de Atacama); and 3) central-southern Chile (Pajaros, Chañaral, Tilgo, Choros, Cachagua, Algarrobo) (Fig 2). This structure needs to be considered when implementing management and conservation action plans for the Humboldt penguin along the southern Pacific coast, also taking into account the population data (numbers of breeding pairs in each colony) that indicate Punta San Juan as a key colony in Peru, supporting around 3,160 breeding pairs [38]; Pajaros, Chañaral, Tilgo and Choros together supporting around 21,700 breeding pairs (14,000 at Chañaral, 1,860 at Choros, 2,640 at Tilgo, and 1,200 at Pajaros); and Pan de Azucar with 3,000 breeding pairs. These regions need to be monitored to avoid population decline. Although some seabirds endemic to the Humboldt Current show weak population genetic structure, such as Peruvian pelicans [5] and Peruvian boobies [12], other marine vertebrates, such as the marine otter (Lontra felina), exhibit higher genetic differentiation between populations from Peru and those distributed along Chile [77]. Despite the population genetic structure among Humboldt penguin colonies, historical gene flow among several colonies was also observed (Table 3), whereas recent gene flow was reduced. Gene flow among seabird colonies can be associated with cold-water upwellings [12]: these areas retain high productivity, becoming important foraging sites during ENSO years. Along the Chilean coast, several upwellings have been identified, the main sites being at Antofagasta and Mejillones, Coquimbo, and Concepción [16]. Thus, Humboldt penguins travel long distances to find these highly productive regions [78] to meet their dietary needs, searching for main food items such as the Peruvian anchovy (Engraulis ringens), the Araucanian herring (Strangomera bentincki), the silverside (Odontesthes regia), the common hake (Merluccius gayi), the Inca scad (Trachurus murphyi), the garfish (Scomberesox saurus scombroides), and the South American pilchard (Sardinops sagax) [79]. Therefore, the genetic structure observed in the present study may result from foraging in distinct upwelling regions. It is possible that individuals from Pan de Azucar and Isla Grande de Atacama forage in the cold-water upwellings of Mejillones and Iquique, leading to reduced gene flow with the other colonies.
This isolation of Pan de Azucar is corroborated by its known foraging radius of 35 km during the breeding season and of 640 km to the north (near Iquique) during the winter [40], reducing gene flow with the other Chilean colonies. The area around Isla Choros is also a frequent site of cold-water upwellings, favoring intense gene flow and reducing genetic structure among nearby islands such as Pajaros, Chañaral and Tilgo. Pajaros and Cachagua showed lower philopatry rates (0.89 and 0.86, respectively), indicating higher gene flow with the other colonies. It is thus possible that individuals from Punta San Juan forage near Choros and Antofagasta in northern Chile, and breed there, leading to gene flow between Peruvian and northern-central Chilean colonies, corroborating the cold-water upwelling hypothesis. Vianna et al. (2014) proposed that changes in population size might result from irruptions of Humboldt penguins into favorable, productive areas, moving from Punta San Juan to northern Chile.
The population genetic structure can also be explained by the species' philopatric behavior, which reduces gene flow. Philopatry was confirmed by the microsatellite-based assignment test for all colonies of Humboldt penguins, with more than 86% of individuals assigned to their population of origin (Table 2). Ecological data also indicate strong fidelity to natal colonies at Punta San Juan in Peru [80], and at Cachagua and Algarrobo in Chile [40,81,82].
Evolutionary history, such as isolation, expansion or contraction of populations, affects the population genetic structure and genetic diversity of a species. In the present study, the D-loop and RAG1 analyses showed stability and recent coalescence (around 24,917 years ago for the D-loop and 145,000 years ago for RAG1). Thus, it is possible that, during glaciation, the Pacific coast experienced a change in productivity, leading to an intense reduction of the global population of the Humboldt penguin, followed by population expansion after the LGM. However, population expansion was detected only in the neutrality tests of the D-loop region and in the network (Table 1, Fig 4). The large population size of the Humboldt penguin may have masked the expansion signal, making it important to increase the number of genomic markers to recover the demographic history; even so, the influence of glaciation on the Humboldt penguin cannot be discarded. Other penguin species show the influence of climate on their distribution and speciation, having experienced strong demographic fluctuations: during the LGM, the Gentoo penguin (Pygoscelis papua) maintained large effective population sizes in Antarctica and the Scotia Arc followed by an expansion [83,84,85], while the Adélie penguin shows two divergent lineages resulting from different glacial refugia in Antarctica [86,87]. The Magellanic penguin also showed a signal of expansion after the LGM, probably due to an increase in available breeding sites [3].
Penguin species in general exhibit high genetic diversity, as detected here for the Humboldt penguin and recorded for the Magellanic penguin [3,88], the Gentoo penguin [83,84,85], the Adélie penguin [84,86,89], and the Chinstrap penguin [9,84]. High genetic diversity results from several evolutionary factors, such as large population size, low inbreeding, and an even sex ratio. Our genetic data showed signatures of drastic reductions at Pupuya, Tilgo, Pajaros, Choros, Chañaral, Pan de Azucar and Punta San Juan, indicating bottlenecks in these colonies. Despite these bottlenecks, the Humboldt penguin has maintained high genetic diversity in all colonies.
Considerations for conservation
Our study shows that population structure in the Humboldt penguin can be better investigated and understood by increasing the number of markers and extending the sampling effort to cover the whole species distribution. Sparse sampling may not reflect the real allele frequencies of the global population, producing an oversimplified picture of the genetic structure. This study of the Humboldt penguin's population genetic structure revealed three major regions: 1) Punta San Juan in Peru, with clear genetic differences from the Chilean colonies; 2) Pan de Azucar and Isla Grande de Atacama in northern Chile, which need special attention as the most genetically differentiated colonies; and 3) the breeding colonies of central-southern Chile.
Based on our results, the following recommendations arise in relation to conservation initiatives. It is important to expand population genetic studies to cover other breeding colonies in Peru, to better understand the relationship between the Punta San Juan population and the two genetic groups detected in Chile. Chañaral and Choros are part of the Humboldt Penguin National Reserve, but the other islands (Tilgo and Pajaros) should also be incorporated into this system to maintain genetic diversity and allow a more integrated form of population management.
The Humboldt penguin suffers from the impacts of several factors, such as interactions with industrial fisheries (overexploitation of prey species, incidental catch), pressure from alien species (e.g. rats, Rattus rattus and R. norvegicus, and feral dogs) that prey on unattended eggs and chicks [90,91], human disturbance from tourism and guano harvesting [92], and habitat loss. Furthermore, predictions of future climate change include local increases in rainfall and temperature in South America [93]. It is important to maintain solid monitoring systems for breeding colonies that could be directly affected by these factors, especially regarding reductions in chick survival and reproductive success. The outcomes of this study thus help to inform better decisions regarding conservation actions for this species.
"Biology"
] |
Band insulator to Mott insulator transition in 1T-TaS2
1T-TaS2 undergoes successive phase transitions upon cooling and eventually enters an insulating state of mysterious origin. Some consider this state to be a band insulator with interlayer stacking order, yet others attribute it to Mott physics that supports a quantum spin liquid state. Here, we determine the electronic and structural properties of 1T-TaS2 using angle-resolved photoemission spectroscopy and X-ray diffraction. At low temperatures, the 2π/2c-periodic band dispersion, along with half-integer-indexed diffraction peaks along the c axis, unambiguously indicates that the ground state of 1T-TaS2 is a band insulator with interlayer dimerization. Upon heating, however, the system undergoes a transition into a Mott insulating state, which exists only in a narrow temperature window. Our results refute the idea of searching for quantum magnetism in 1T-TaS2 only at low temperatures, and highlight the competition between on-site Coulomb repulsion and interlayer hopping as a crucial aspect for understanding the material's electronic properties.
intermediate phase is needed.
(iv) A key difference between the applied experimental techniques is neither mentioned nor discussed in the manuscript: transport and XRD measurements are bulk sensitive, while ARPES is highly surface sensitive. So, an important question is to what extent the reported ARPES results are representative of bulk behavior. Considering the differences in transition temperatures and the absence of the 233 K transition from the reported bulk data, it may well be that the intermediate phase seen in ARPES is a surface effect, similar to the surface Mott transition that is known to occur in the sister material 1T-TaSe2. That is, the intermediate phase would explicitly not reflect "an intrinsic property of 1T-TaS2". Some minor points are the following: (v) The c lattice-parameter change occurring at 212 K should be quantified.
(vi) Heating and cooling rates for the ARPES and XRD measurements should be stated.
(vii) Experimental details of the transport measurements should be given.
In conclusion, while the combined ARPES and XRD results are in principle interesting, the experimental evidence provided is mostly indirect and there is a lack of technical rigor so that the central conclusion ("transition from band insulator to Mott insulator at intermediate temperature") is not substantiated by the data. Furthermore, the discussion and explanation of the results remains at a purely qualitative level, whereas quantitative insight is needed to make progress in better understanding this intriguing material. The manuscript in its present form cannot be recommended for publication. A major revision is needed.
Reviewer #2 (Remarks to the Author): The authors of the article "Band insulator to Mott insulator transition in 1T-TaS2" have performed resistivity, ARPES and X-ray diffraction measurements of the transition metal dichalcogenide 1T-TaS2. Their data convincingly show that a dimerization of the unit cell in the stacking direction takes place at the metal-insulator transition. These data confirm important results that have already been published in Ref. 8 and Ref. 9 of the article. In addition, the authors discovered the presence of a new phase that shows up at 217 K while heating up the sample. After reading the article, we decided to perform the experiment ourselves and we confirm the existence of such a new phase. This article is very interesting, well written and based on reproducible, high-quality data. I have no doubt that it will generate the interest of a broad community. On the other hand, minor comments should be taken into account in order to improve the manuscript: 1) The authors propose that the new phase is a Mott insulator. However, the resistivity shows a jump down to the metallic value when heating above 217 K. The authors should explain this apparent incongruity.
2) It would be very valuable if the authors could estimate the magnitude of displacement related to the dimerization.
3) The authors have cited in Ref. 23 a time-resolved ARPES experiment on 1T-TaS2, but they neglected to cite the first work on this subject (L. Perfetti, Physical Review Letters 97, 067402 (2006)). This omission must be corrected.
Reviewer #3 (Remarks to the Author):
The authors performed temperature-dependent ARPES and XRD measurements, demonstrating a band-insulating ground state with interlayer dimerization in bulk 1T-TaS2 single crystals and the existence of an intermediate Mott insulating state. This layered compound with a simple crystal structure provides a great platform to investigate its complex electronic phase transitions (multi-type CDWs, superconductivity and so on) and their physical origins, from bulk crystals down to few-layer devices. Recent theoretical and STM studies on few-layer samples emphasize the importance of interlayer stacking order. I suggest the following comments be addressed before consideration. 1) As stated at the end of this manuscript, it is intriguing that no transport anomaly responds to the observed intermediate Mott insulating state. Would that be observable in thin samples? What is the thickness of the sample measured in this study? Since the contribution of the interlayer interaction would be more significant in thin samples, I suggest the authors compare the current results with those on thin samples. 2) line 105: as noted, the temperature-dependent data are well reproducible in many different samples. I suggest the authors plot all the data in comparison, at least presenting them in the supplementary material, as well as the differences between the samples used. 3) Figure 4(a): I suggest that the intensity (y-axis) be presented on a log scale, since the (00l) peaks are too strong. Peaks other than (00l) and (00l/2) should also be addressed.
4) The quality of 1T-phase single crystals strongly depends on the quenching process and the growth temperature. More details of the quality of the single crystals should be provided, such as SEM images, EDX results (S-deficiency usually exists), and powder XRD with refinement.
Reviewer #1 (Remarks to the Author):
The manuscript reports results of temperature-dependent electrical resistance, ARPES, and XRD measurements of the layered CDW reference material 1T-TaS2. (i) There is a significant inconsistency in the phase transition behavior observed with the three different techniques used. The measured resistance displays the known behavior: NC-C transition at 180 K upon cooling and C-T transition at 220 K upon heating. The ARPES data, by contrast, show a NC-C transition upon cooling at 193 K (Fig. 2c) and the C-I transition upon heating at 217 K (Figs. 2e and 2f), whereas this latter transition is observed in the XRD data at 212 K (Fig. 4b). These differences call into question the quality of the samples and the reproducibility of the results. The authors explicitly state that "the temperature-dependent data are well reproducible in many different samples", but they do not show the data.
We added ARPES and transport data in the supplementary material (Supplementary Figures 1-3). The ARPES data in Supplementary Figure 1 demonstrate the reproducibility of our observations. The intermediate state and the two phase transitions could be observed in all measured samples.
We summarized the phase transition temperatures determined by the ARPES, resistivity, and XRD measurements in Supplementary Figure 3. There is a small variation of the transition temperatures among different samples, but the deviation is less than 10 K. For technical reasons, we cannot measure the same sample using the transport, ARPES and XRD techniques; it is therefore normal that the transition temperatures measured by different techniques differ. Such differences are within the standard deviation and can be explained by a small sample inhomogeneity. This does not invalidate the conclusions of our manuscript.
We explained this point in the revised manuscript. We would also like to point out that referee #2 confirmed our observation after performing the experiments themselves. Based on all these results, we believe our observations are solid and reproducible.
(ii) The ARPES evidence for non-negligible k_z dispersion in the low-temperature C phase is only indirect, provided by the supposedly k_z smearing-induced spectral weight shadow reflecting the projected band dispersion along the z direction (Fig. 3a, rightmost panel, taken at 175 K). What is disturbing here is that a different data normalization has apparently been applied to this ARPES intensity map compared to the one next to it taken in the intermediate "I" phase at 225 K (this can be seen in the photoemission intensity above the Fermi level). In order to convincingly establish the presence or absence of k_z dispersion in the C and I phase, respectively, photon energy-dependent measurements are required.
Following the referee's suggestion, we measured the band dispersions along the k_z direction in both the I and C-CDW phases using synchrotron-based ARPES. The data are shown in Fig. 4 of the revised manuscript. The new data provide direct evidence, which is consistent with our temperature-dependent data and strengthens our conclusions.
The flat band is gapped and shows weak photon energy dependence in the intermediate phase at 225 K, indicating the two-dimensionality of the flat band in the I phase. However, when entering the C-CDW phase, the data become strongly photon energy dependent: the band position shifts periodically between −230 meV and −90 meV. First, the measured bandwidth along the k_z direction is around 140 meV in the C-CDW phase, which is well consistent with the k_z broadening effect observed with the helium lamp (21.2 eV). Second, the period of the k_z dispersion is around 2π/2c, which suggests the existence of interlayer dimerization. Third, the weak k_z dispersion and large energy gap observed in the I phase suggest that the I phase is a Mott insulating phase in which the Coulomb repulsion plays a more dominant role than the interlayer hopping.
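To make the quoted numbers concrete, the following minimal sketch (ours, not the authors' analysis) models the flat-band position with a single-cosine interlayer dispersion whose period 2π/2c reflects the doubled unit cell along c; the 140 meV bandwidth and the −230/−90 meV band positions come from the text, while the functional form is an illustrative assumption.

```python
import numpy as np

# Illustrative sketch (not the authors' fit): model the flat-band position in the
# C-CDW phase with a single-cosine interlayer dispersion whose period 2*pi/(2c)
# reflects the dimerized (doubled) unit cell along c.
c = 5.928          # C-CDW c lattice parameter (angstrom), from the text
W = 140.0          # k_z bandwidth (meV), from the text
E_center = -160.0  # band center (meV): midpoint of the quoted -230 and -90 meV

kz = np.linspace(0.0, 2 * np.pi / c, 601)      # one conventional BZ along c
E = E_center - (W / 2.0) * np.cos(kz * 2 * c)  # periodic in 2*pi/(2c)

print(f"min/max band position: {E.min():.0f} / {E.max():.0f} meV")
# -> -230 / -90 meV, reproducing the quoted periodic shift
```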
(iii) Although the XRD evidence for c-axis doubling in the low-temperature C phase appears to be more direct, as given by the (0, 0, 5/2) and (0, 0, 7/2) superstructure reflections in Fig. 4b, the origin of these diffraction peaks, their intensities, and the underlying CDW stacking order remain completely unclear. This potentially novel result is also not put into the context of the results of previous structural determinations of the C phase. A convincing structural analysis of the transition to the intermediate phase is needed.
The structure of the C-CDW phase can be divided into three tiers: the intra-layer CDW, the inter-layer dimerization, and the inter-layer stacking configuration. First, previous STM and XRD studies show that the intra-layer CDW is a √13 × √13 star-of-David structure in the C-CDW phase; this has also been confirmed by our ARPES mapping in Fig. 1. Second, our XRD and photon-energy-dependent ARPES data show that adjacent layers form an inter-layer dimerization with a 2π/2c period. The only remaining unknown is then the interlayer stacking configuration. There are three proposed configurations: on-top stacking, alternating stacking, and non-alternating stacking (PRB 98, 195134; PRL 122, 106404). Theoretically, it has been proposed that the energy differences between the different configurations are small. Therefore, different stacking configurations could coexist, and the layers could stack randomly along the c direction. We have measured the temperature dependence of the √13 × √13 diffraction peak using XRD; the results are plotted in Supplementary Figure 5. The diffraction peak broadens remarkably below 212 K. Together with the weak intensity of the half-integer diffraction peaks, our XRD data are consistent with the theoretical prediction that the interlayer stacking shows a certain randomness in the C-CDW phase. To fully determine the stacking configuration experimentally, layer-resolved XRD measurements are required.
Although the stacking configuration might be random and is difficult to determine experimentally, indirect evidence can be found by comparing our ARPES results with theoretical band calculations. Band calculations show that for the on-top and non-alternating stacking configurations, the system is metallic, with one band crossing the Fermi energy. This is inconsistent with our observation. Our results are more consistent with an alternating stacking configuration, for which the predicted band structure is that of a small-gap band insulator.
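As a toy illustration of why interlayer dimerization produces half-integer (0, 0, l/2) reflections, the following sketch computes the diffracted intensity of a one-dimensional chain of point scatterers with a small pairing displacement δ; the value of δ is an arbitrary assumption, not a fitted parameter.

```python
import numpy as np

# Toy structure factor (illustration only): N layers along c, dimerized in pairs,
# each layer displaced by +/- delta/2. Half-integer (0,0,L/2) peaks appear only
# when delta != 0, and for small delta their intensity grows as sin^2(pi*L*delta/c),
# consistent with weak superstructure reflections for a small distortion.
N = 200
c = 1.0
delta = 0.02 * c                       # assumed small dimerization displacement
n = np.arange(N)
z = n * c + np.where(n % 2 == 0, +delta / 2, -delta / 2)

for L in (2.0, 2.5, 3.0, 3.5):
    q = 2 * np.pi * L / c
    I = np.abs(np.exp(1j * q * z).sum()) ** 2 / N**2
    print(f"L = {L}: relative intensity {I:.4f}")
```

For δ = 0 the half-integer intensity vanishes identically, which is why the strength of the (0, 0, l/2) peaks is a direct measure of the dimerization.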
We rewrote the discussion part to explain this point explicitly in the revised manuscript. As mentioned by the referee, the 233 K phase transition and the insulating intermediate phase have not been observed in the resistivity and XRD data. Considering that ARPES is more sensitive to the electronic states near the sample surface, we can explain this discrepancy using two different scenarios.
First, the I phase could be a surface state that exists only in the surface layer and not in the bulk layers. However, this scenario contradicts some of our observations. For example, with 21.2 eV photons, the probing depth of ARPES is normally around or over 2 layers. If the surface and bulk electronic states were different, we should observe two different sets of bands representing the surface and bulk, respectively. However, our data show only one set of bands, which suggests that the I phase extends at least two TaS2 layers along the c direction. Furthermore, the first phase transition observed by ARPES is around 217 K, which is well consistent with the C-CDW transition temperature determined by the transport and XRD measurements. Our photon-energy-dependent data clearly reveal the k_z dispersion of the bands, whose period is well consistent with the bulk lattice parameter. All these results suggest that the ARPES data reflect the bulk properties of 1T-TaS2.
Another scenario is that the system is inhomogeneous near the phase transition and shows phase separation among different layers. The I phase only exists in certain layers and therefore cannot be picked up by the resistivity and XRD measurements. Under this scenario, the observation of I phase by ARPES should be highly dependent on the cleaved surface. However, this is not the case. The high reproducibility of our ARPES data seems to suggest that the I phase may have a high probability of occurrence near the sample surface.
Our results undoubtedly show the existence of the I phase in the few topmost layers of 1T-TaS2; to determine whether the I phase also exists in the bulk, layer-resolved transport and XRD measurements are required. We agree with the referee that this is an intriguing open question, but we think it is beyond the scope of our manuscript. The main focus of our manuscript is the discovery of the intermediate Mott phase and its transition to a band insulator with interlayer dimerization. We think this captures the most important physics of 1T-TaS2 and explains its rich phases and electronic properties. We hope our results will stimulate future research on this unanswered question in this intriguing material.
We rewrote the discussion part to explain this point explicitly in the revised manuscript.
Some minor points are the following: (v) The c lattice-parameter change occurring at 212 K should be quantified.
We added the c lattice parameters in the revised manuscript. The c lattice parameter is 5.902 Å at high temperatures and 5.928 Å in the C-CDW phase.
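For reference, the quoted values correspond to a relative change of slightly under half a percent (our arithmetic, not a number from the manuscript):

```latex
\frac{\Delta c}{c} = \frac{5.928 - 5.902}{5.902} \approx 4.4 \times 10^{-3} \approx 0.44\%
```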
(vi) Heating and cooling rates for the ARPES and XRD measurements should be stated.
For the ARPES measurements in a heating process, the samples were cooled naturally from room temperature (RT, 300 K) to 80 K with a rapid fall of temperature (about 20 K/min). After adequate cooling for about 10 min at the lowest temperature, the samples were heated to around 160 K at a rate of about 1.5–5 K/min. The samples were then heated slowly (0.22–0.4 K/min) to track the band evolution near the phase transitions. ARPES data were collected during the heating process. For the ARPES measurements in a cooling process, the sample was cleaved at 300 K and cooled to 210 K at a rate of about 2 K/min; the sample was then cooled slowly to 160 K at a rate of about 0.5 K/min. ARPES data were collected during the cooling process. More experimental details are shown in Tab. S1.
For the XRD measurement, the sample was cooled naturally from RT to 93 K with a rapid fall of temperature (about 10 K/min). After adequate cooling for about 1 hour at 120 K, we heated the sample to 170 K at a rate of 3 K/min, and then to 370 K at a rate of 0.16 K/min. The XRD data were collected during the heating process.
We added the experimental details in the revised manuscript and supplementary material.
(vii) Experimental details of the transport measurements should be given.
The transport data were measured in a Physical Property Measurement System (PPMS, Quantum Design, Inc.) using the standard four-probe method. The heating and cooling rates were 3 K/min. We added the experimental details of the transport measurements in the revised manuscript.
In conclusion, while the combined ARPES and XRD results are in principle interesting, the experimental evidence provided is mostly indirect and there is a lack of technical rigor so that the central conclusion ("transition from band insulator to Mott insulator at intermediate temperature") is not substantiated by the data. Furthermore, the discussion and explanation of the results remains at a purely qualitative level, whereas quantitative insight is needed to make progress in better understanding this intriguing material. The manuscript in its present form cannot be recommended for publication. A major revision is needed.
We thank the referee for finding our results interesting. We have conducted the photon-energy-dependent experiment following the referee's suggestion. The new data provide strong and direct evidence, which strengthens our conclusions. We hope the referee will agree that our conclusion is based on high-quality, reproducible data.
The main focus of our paper is the discovery of the intermediate Mott phase and its transition to a band insulator with interlayer dimerization. We agree with the referee that the interlayer stacking configuration and the existence of the I phase in bulk layers are intriguing open questions. However, it is technically difficult to provide definite answers to these questions. We rewrote the discussion part to explain all possible scenarios explicitly in the revised manuscript. We hope our results and discussions will stimulate future theoretical and experimental research on these unanswered questions.
Our results do provide quantitative information, such as the energy scales of t_∥, U, and t_⊥. These parameters are critical for constructing a correct theoretical model of this intriguing material. The comparison between the experimental parameters and theories supports our conclusion that the observed transition is a Mott insulator to band insulator transition.
We thank the referee for the comments and suggestions, which helped us improve the paper significantly. We hope the referee will find the revised manuscript satisfactory.
The authors of the article "Band insulator to Mott insulator transition in 1T-TaS2" have performed resistivity, ARPES and X-ray diffraction measurements of the transition metal dichalcogenide 1T-TaS2.
Their data convincingly show that a dimerization of the unit cell in the stacking direction takes place at the metal-insulator transition. These data confirm important results that have already been published in Ref. 8 and Ref. 9 of the article. In addition, the authors discovered the presence of a new phase that shows up at 217 K while heating up the sample. After reading the article, we decided to perform the experiment ourselves and we confirm the existence of such a new phase.
This article is very interesting, well written and based on reproducible, high-quality data. I have no doubt that it will generate the interest of a broad community. On the other hand, minor comments should be taken into account in order to improve the manuscript: 1) The authors propose that the new phase is a Mott insulator. However, the resistivity shows a jump down to the metallic value when heating above 217 K. The authors should explain this apparent incongruity.
We thank the referee for his/her suggestion. We explained this point explicitly on page 4 of our reply and also in the revised manuscript. The inconsistencies between the ARPES and transport data call into question the existence of the Mott phase in the bulk layers. While our data undoubtedly show the existence of the intermediate Mott phase in the few topmost layers of 1T-TaS2, layer-resolved transport or XRD experiments are required to determine whether the Mott phase exists in the bulk layers. We hope our results will stimulate future theoretical and experimental research to clarify this issue.
2) It would be very valuable if the authors could estimate the magnitude of displacement related to the dimerization.
We added the c lattice parameters in the revised manuscript. The c lattice parameter is 5.902 Å at high temperatures and 5.928 Å in the C-CDW phase.
3) The authors have cited in Ref. 23 a time-resolved ARPES experiment on 1T-TaS2, but they neglected to cite the first work on this subject (L. Perfetti, Physical Review Letters 97, 067402 (2006)). This omission must be corrected.
We thank the referee for his/her correction. We added the related reference in the revised manuscript.
We thank the referee for his/her comments and suggestions, which helped us improve the paper significantly. We hope the referee will find the revised manuscript satisfactory.

We thank the referee for his/her suggestion. We explained this point explicitly on page 4 of our reply and also in the revised manuscript. The inconsistencies between the ARPES and transport data call into question the existence of the Mott phase in the bulk layers. While our data undoubtedly show the existence of the intermediate Mott phase in the few topmost layers of 1T-TaS2, layer-resolved transport or XRD experiments are required to determine whether the Mott phase exists in the bulk layers. Transport or XRD experiments on thin 1T-TaS2 films would be very helpful.
Resistivity measurements on thin 1T-TaS2 films have been reported in the literature [Sci. Rep. 4, 7302 (2014); PNAS 112, 15054-15059 (2015); Sci. Adv. 1, e1500606 (2015); Nat. Nanotech. 10, 270-276 (2015)]. Despite their inconsistencies, they share some common results. First, the transition broadens with decreasing sample thickness, which may suggest the existence of phase inhomogeneity. Second, the sharp increase of resistivity at 217 K disappears in thin films, indicating a disappearance of the C-CDW phase; this is consistent with our conclusion that the interlayer hopping is important for the C-CDW phase. Third, the system's ground state remains insulating in ultrathin films down to 2 nm thick. In such ultrathin films, the intra-layer interactions should play a more dominant role than the interlayer hopping. Therefore, the insulating phase observed in ultrathin films could be attributed to the Mott phase observed here.
We explained this point in the revised manuscript. While the current thin-film results do not contradict our conclusions, more detailed experiments are required to establish whether the Mott phase exists in bulk layers and to characterize its properties.
2) line 105: as noted, the temperature-dependent data are well reproducible in many different samples. I would suggest the authors plot all the data in comparison, at least presenting it in the supplementary material, as well as the differences between the samples used.

We added ARPES and transport data in the supplementary material (Supplementary Figures 1-3). The ARPES data in Supplementary Figure 1 demonstrate the reproducibility of our observations. The intermediate state and the two phase transitions could be observed in all measured samples.
3) Figure 4(a): I would suggest that the intensity (y-axis) be presented on a log scale, since the (00l) peaks are too strong. Peaks other than (00l) and (00l/2) should also be addressed.
We plotted the data on a linear scale to better illustrate the intensity difference between the (0, 0, L/2) peaks; the same data are plotted on a log scale in Supplementary Figure 4 of the supplementary material. All the other peaks come from the polycrystalline substrate and are temperature independent.
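A minimal matplotlib sketch of the point, using synthetic peaks rather than the measured data: strong integer (00l) reflections dwarf the weak half-integer ones on a linear axis, while a logarithmic axis makes both visible.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic illustration (not the measured data): strong integer (00l) peaks and
# weak half-integer (00l/2) peaks; a log y-axis makes the weak peaks visible.
L = np.linspace(1.5, 4.5, 3000)

def peak(L0, amp, w=0.01):
    return amp * np.exp(-0.5 * ((L - L0) / w) ** 2)

I = 1e2 * np.ones_like(L)          # flat background
for L0 in (2, 3, 4):
    I += peak(L0, 1e6)             # strong integer peaks
for L0 in (2.5, 3.5):
    I += peak(L0, 5e3)             # weak half-integer peaks

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(L, I); ax1.set_title("linear scale")
ax2.plot(L, I); ax2.set_yscale("log"); ax2.set_title("log scale")
for ax in (ax1, ax2):
    ax.set_xlabel("L (r.l.u.)"); ax.set_ylabel("intensity (arb. u.)")
plt.tight_layout(); plt.show()
```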
4) The quality of 1T-phase single crystals strongly depends on the quenching process and the growth temperature. More details of the quality of the single crystals should be provided, such as SEM images, EDX results (S-deficiency usually exists), and powder XRD with refinement.
We added the experimental details of our sample growth in the revised manuscript. To demonstrate the high quality of our samples, we added a sample photo, powder XRD, and rocking-curve data in Supplementary Figure 6 of the supplementary material. Firstly, the sample is large and shows a mirror-like naturally cleaved surface. Secondly, the diffraction peaks are narrow: the FWHMs are 0.0185° and 0.0149° in the rocking curve and the two-theta scan, respectively. Thirdly, the powder XRD data are consistent with the reported data, showing a pure 1T phase of TaS2. The refinement confirmed the phase purity and the trigonal crystal structure of 1T-TaS2 with the P-3m1 space group, with lattice parameters a = b = 3.366 Å, c = 5.898 Å (Supplementary Figure 6). Fourthly, our resistivity data are highly reproducible, with a standard deviation of the transition temperature of less than 10 K, and are well consistent with the data reported in the literature. All these results indicate the high quality of our samples. | 5,745.6 | 2020-08-24T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Quantum Light and Coherent States in Conducting Media
We present a simple description of classical and quantum light propagating through homogeneous conducting linear media. With the choice of Coulomb gauge, we demonstrate that this description can be performed in terms of a damped harmonic oscillator which is governed by the Caldirola-Kanai Hamiltonian. By using the dynamical invariant method and the Fock states representation we solve the time-dependent Schrödinger equation associated with this Hamiltonian and write its solutions in terms of a special solution of the Milne-Pinney equation. We also construct coherent states for the quantized light and show that they are equivalent to the well-known squeezed states. Finally, we evaluate some important properties of the quantized light such as expectation values of the amplitude and momentum of each mode, their variances and the respective uncertainty principle.
Introduction
For a long time, the old and fascinating problem (from both the classical and quantum viewpoints) of the interaction of light with matter has received considerable attention from physicists. The story of its solution is a familiar one, and that solution has been of crucial importance for the development of our understanding of nature.
In order to obtain the basic concepts needed to study the classical and quantum behavior of light we must take Maxwell's equations into account. In the quantum case, the quantization of these equations is traditionally performed in free space or in empty cavities by associating a time-independent mechanical oscillator with each mode of the field. Now, in order to obtain a solution of the wave equation for the vector potential we consider light waves in a certain volume of space. So, by the familiar procedure of separation of variables, we write the vector potential in terms of the mode functions u_l(r) and mode amplitudes q_l(t) as A(r, t) = Σ_l q_l(t) u_l(r) (Equation (7)), where v = 1/(με)^(1/2) is the velocity of light in the medium.
In what follows, let us discuss the solutions of Equations (8) and (9). The solution of Equation (9), a damped harmonic oscillator equation, can be written in the form q_l(t) = A_l e^(−σt/2ε) cos(Ω_l t + δ_l), where A_l and δ_l are constants to be determined by the initial conditions and Ω_l is given by Ω_l = [ω_l² − (σ/2ε)²]^(1/2) [33]. Hence, the total Hamiltonian of the electromagnetic field is a sum of individual Hamiltonians corresponding to each mode, that is, H = Σ_l H_l. In the following discussion we focus our attention on the solution of Equation (8). Considering that the electromagnetic field is contained in a certain cubic volume V of side L of nonrefracting media, the mode functions are required to satisfy the transversality condition and to form a complete orthonormal set. Furthermore, assuming periodic boundary conditions on the surface, the mode functions take the familiar plane-wave form.
We can then reconstruct the vector potential A(r, t) (see Equation (7)) by using Equations (10) and (13). Hence, using Equations (7) and (13) we can write, for each mode l, the electric and magnetic fields (see Equation (5)). Therefore, the above results give us a complete classical description of the propagation of light in conducting linear media, since the electric field E and the magnetic field B are completely specified. Here it is worth noticing that in the previous description we have associated a damped harmonic oscillator with each mode of the electromagnetic field. Let us also observe that in the absence of dissipation, that is, σ = 0, the Hamiltonian (12) reduces to that of the standard harmonic oscillator, with the permittivity playing the role of the mass of the mechanical oscillator. As a consequence, all of our previous results coincide with those for the propagation of light in empty cavities.
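As a quick numerical sanity check of the classical mode dynamics described above (a sketch with illustrative parameter values, not taken from the paper), one can integrate the damped mode equation and compare it with the exponentially decaying cosine solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal numerical check of the classical mode equation
#   q'' + (sigma/eps) q' + w0^2 q = 0
# against q(t) = A exp(-sigma t / 2 eps) cos(Omega t + delta),
# with Omega = sqrt(w0^2 - (sigma/2 eps)^2). All parameter values are illustrative.
sigma, eps, w0 = 0.2, 1.0, 2.0
gamma = sigma / eps
Omega = np.sqrt(w0**2 - (gamma / 2) ** 2)

def rhs(t, y):
    q, qd = y
    return [qd, -gamma * qd - w0**2 * q]

t = np.linspace(0, 20, 2001)
sol = solve_ivp(rhs, (0, 20), [1.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

# Closed-form solution for the initial conditions q(0) = 1, q'(0) = 0
delta = np.arctan(-gamma / (2 * Omega))
A = 1.0 / np.cos(delta)
q_exact = A * np.exp(-gamma * t / 2) * np.cos(Omega * t + delta)
print("max deviation:", np.max(np.abs(sol.y[0] - q_exact)))  # tiny: solutions agree
```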
Quantum Light Propagation in Conducting Media
In order to obtain a quantum description of light propagating in a conducting linear medium we need to quantize the electromagnetic field. Now, as the spatial mode functions u_l(r) are completely determined, the amplitude of each normal mode in Equation (7) needed to specify a particular field configuration is q_l(t) [1]. Thus, for each canonical operator q_l, the electric (E) and magnetic (B) field operators may be derived from the vector potential A by using Equation (5). So, let us move our attention to the canonical operator q_l(t) in order to obtain the vector potential. For this purpose, let us solve the Schrödinger equation associated with the Hamiltonian (12), where the coordinate q_l(t) and the momentum p_l are now canonically conjugate operators. We can obtain the solutions of this equation with the aid of the dynamical invariant method developed by Lewis and Riesenfeld [23]. According to this method, we must look for a nontrivial Hermitian operator I_l(t) which satisfies the equation dI_l/dt = ∂I_l/∂t + (1/iħ)[I_l, H_l] = 0. Then, the solutions of the Schrödinger Equation (16) can be written in terms of the eigenstates of I_l(t). In what follows, let us consider a quadratic invariant that satisfies Equation (17); here, we assume an invariant of the form given in Equation (21). We must now find the eigenstates of the invariant I_l(t). To this end, we will use the Fock representation since, as is well known, the quantum behavior of some quantum systems, in particular quantum harmonic oscillator-type systems, is most transparent in Fock states, which are states with specific numbers of energy quanta. Then, let us introduce annihilation and creation-type operators a_l(t) and a_l†(t), defined as in [16,23] (Equations (24) and (25)). In terms of these operators, the invariant (21) can be factored as in Equations (26) and (27). From the expressions of these operators, and using Equations (7), (13), (23), (33) and (36), we can write the field operators in terms of the annihilation and creation operators. The above field operators describe the quantum propagation of light in conducting linear media. We also see that both the electric and magnetic fields decrease exponentially in time, proportionally to e^(−σt/2ε), due to the conductivity of the medium. Further, in the absence of dissipation, that is, σ = 0, these fields reduce to those in empty cavities [1].
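The core of the Lewis-Riesenfeld construction can be illustrated numerically in the dissipationless limit σ = 0, where the Hamiltonian reduces to the ordinary oscillator: for any solution ρ of the Milne-Pinney equation and any classical trajectory q of the oscillator, the quadratic invariant stays constant. The following sketch (with arbitrary illustrative initial conditions, unit mass, and constant frequency) verifies this:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration (sigma = 0 limit, unit mass): for any solution rho of the
# Milne-Pinney equation rho'' + w^2 rho = 1/rho^3 and any trajectory q of
# q'' + w^2 q = 0, the Lewis invariant
#   I = 0.5 * [ (q/rho)^2 + (rho*q' - rho'*q)^2 ]
# is constant in time. All values below are illustrative.
w = 1.3

def rhs(t, y):
    q, qd, r, rd = y
    return [qd, -w**2 * q, rd, -w**2 * r + 1.0 / r**3]

t = np.linspace(0, 30, 3001)
sol = solve_ivp(rhs, (0, 30), [1.0, 0.3, 0.9, 0.1], t_eval=t, rtol=1e-10, atol=1e-12)
q, qd, r, rd = sol.y
I = 0.5 * ((q / r) ** 2 + (r * qd - rd * q) ** 2)
print("I spread:", I.max() - I.min())  # tiny: I is conserved up to integration error
```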
In what follows, we use the Fock states to calculate the expectation values of the amplitude q_l and momentum p_l, their variances, and the respective uncertainty principle. Hence, making use of Equations (30) and (31) and after a little algebra, we find the expectation values, the quantum variances, and the corresponding uncertainty product for each mode. In summary, we have presented a simple description of classical and quantum light propagating through homogeneous conducting linear media. We have used the Coulomb gauge and considered light waves confined in a cubical volume of side L filled with a conductive medium, as well as light propagating under periodic boundary conditions. We have demonstrated that this propagation can be described by associating a damped mechanical oscillator with each mode of the electromagnetic field. As a consequence, we have established a unified procedure for obtaining the classical and quantum propagation of light in empty cavities (or free space) and in cavities filled with a material medium: in the former case, it is usually performed by associating an ordinary harmonic oscillator with each mode of the electromagnetic field, while in the latter it can be performed by associating a damped harmonic oscillator. Further, using the invariant method, appropriate annihilation and creation-type operators, and the Fock states, we have easily solved the time-dependent Schrödinger equation for our problem and written its solutions in terms of a special solution of the Milne-Pinney equation. We have also constructed coherent states for the quantized light and calculated the quantum variances of the amplitude q_l(t) and momentum p_l(t), as well as the uncertainty principle for each mode of the electromagnetic field in both Fock and coherent states. We have seen that the uncertainty product in the coherent states is equal to the minimum value of that of the number states. In addition, we have seen that the uncertainty principle in the coherent states, in general, does not attain its minimum value. By employing a direct procedure we have shown that this latter result occurs because the coherent states constructed previously correspond to squeezed states. Finally, we expect that the simple procedure developed in this work can be helpful for investigating subjects related to the interaction of light with material media. | 1,908.2 | 2020-11-10T00:00:00.000 | [
"Physics"
] |
Acute effects of unilaterally sectioning the superior ovarian nerve of rats with unilateral ovariectomy on ovarian hormone (progesterone, testosterone and estradiol) levels vary during the estrous cycle
The present study analyzed the participation of the left and right superior ovarian nerves (SON) in regulating progesterone, testosterone, and estradiol serum levels in unilaterally ovariectomized rats on each day of the estrous cycle. For this purpose, ovarian hormone concentrations in serum were measured in animals with either sham surgery, unilateral ovariectomy (ULO), unilateral sectioning of the SON, or sectioning of the SON innervating the in situ ovary in rats with ULO. The results of this investigation show that the right and left ovaries have different capacities to maintain normal hormone levels, that this capacity varies during the estrous cycle, and that it depends on the integrity of the SON innervation. In rats with only one ovary, the effects of ovarian denervation on hormone levels varied according to which ovary remained in situ, the specific hormone, and the day of the estrous cycle when treatment was performed. The present results support the idea that the ovaries send and receive neural information that is processed in the central nervous system, and we propose that this information participates in controlling the secretion of gonadotropins related to the regulation of ovarian functions.
Background
Asymmetry in the ovaries' morphology, physiology, and regulatory structures is well established. Evidence has been published suggesting that these asymmetries play an important functional role in regulating gonadal functions, and that the degree of asymmetry between the gonads fluctuates along the estrous cycle [1]. The ovarian innervations play a role in regulating the ovulation process [2][3][4][5] and hormone secretion [6,7], and function as neural pathways that participate in modulating the hypothalamic and non-hypothalamic centres that regulate the secretion of gonadotropins [1]. Furthermore, it has been proposed that the ovarian innervations modulate the reactivity of different ovarian compartments to gonadotropin effects [1,8].
According to Uchida et al. [20], neural reflexes from the abdominal skin to the ovaries affect ovarian blood flow and the activity of the SON. The response level depended on whether the left or right abdominal afferent was stimulated, since stimulating the left abdomen produced a much stronger effect on the activity of the left ovarian sympathetic nerve than stimulating the right abdomen. The response of ovarian blood flow to abdominal stimulation is mediated as a reflex response via the ovarian sympathetic nerves; this response is controlled via supra-spinal pathways and depends on the estrous cycle [21].
Niswender et al. [22] suggest that there is evidence indicating that ovarian blood flow is an important factor regulating the activity of gonadotropic hormones at the luteal cell level, and that a secondary mechanism of action of LH may be to increase blood flow to the corpus luteum.
The ovarian and uterine arteries, with anastomoses between them, provide the arterial blood supply to the ovaries. Blood flow to the ovaries varies in magnitude and distribution throughout the estrous cycle [23][24][25], and the number and distribution of the follicular and luteal capillaries change throughout the estrous cycle [26].
Most neurons from which the SON fibers originate are located in the celiac-superior mesenteric ganglion complex (CSMG). The SON carries most of the catecholaminergic fibers innervating endocrine ovarian cells, which are distributed in the peri-follicular theca layer and are closely related to the theca interna cells [9,27]. In prepubertal rats, 24 and 72 hrs after unilateral or bilateral sectioning of the SON, the NA levels in the denervated ovary were lower than in untouched (control) and laparotomized animals [28].
Aside from the catecholaminergic innervation, the SON provides vasoactive intestinal peptide (VIP) [29] and nitric oxide (NO) [30] innervations to the ovaries. NO inhibits cytochrome P450 aromatase activity and the secretion of estradiol (E2) by granulosa cells in culture [31]. In vitro studies show that, in the rat, the participation of neurotransmitters in regulating the secretion of ovarian progesterone (P4) varies with the day of the estrous cycle. On diestrus-1 (D1), neuropeptide Y (NPY), NA and VIP inhibit P4 secretion by the ovaries, while on diestrus-2 (D2) these neurotransmitters stimulate P4 secretion. On D1 and D2, the effects of NA + VIP or NA + NPY on P4 secretion were higher than those of VIP or NPY alone [23]. In the rat, ovary denervation reduces the synthesis and secretion of P4 by inhibiting 3-betaHSD activity [32]. In the pig, sectioning of the plexus nerve and the SON led to lower plasma levels of luteinizing hormone (LH), P4, androstenedione (A4), testosterone (T), estrone and estradiol-17beta. Further, a significant increase in the immuno-expression of cholesterol side-chain cleavage cytochrome P450 in follicles, together with a decrease in 3-betaHSD and in plasma levels of LH, P4, A4, T, estrone and estrogen, has been documented [33].
Unilateral ovariectomy (ULO) is a useful tool for studying the mechanisms involved in the asymmetric responses of the ovaries to neuroendocrine regulating signals [34][35][36][37]. The difference between the right and left ovaries' capacity to release oocytes seems to be related to the type and degree of the innervations in each gonad [1]. According to Klein and Burden [10], the number of neural fibers received by the right ovary is higher than that received by the left, while Toth et al. [38] showed that the left ovary sends more neural information to the central nervous system (CNS) than the right ovary. In addition, the right and left ovaries show different ovulatory responses to surgical denervation, and these responses vary according to the day of the estrous cycle when surgery is performed [3,39].
Ovarian denervation by sectioning the vagus nerve has different effects in normal cyclic rats and ULO rats. In normal cyclic rats, sectioning the left vagus nerve resulted in a lower ovulation rate than in sham-operated animals, while sectioning the right vagus nerve did not modify the ovulation rate. Sectioning the right or left vagus nerve in right-ULO rats (left ovary in situ) reduces compensatory ovarian hypertrophy. In turn, sectioning the left vagus nerve induced different effects depending on which ovary remained in situ: left-side vagotomy performed in right-ULO rats (left ovary in situ) resulted in higher ovulation rates, compensatory ovarian hypertrophy, and number of ova shed, while the same procedure in left-ULO rats (right ovary in situ) resulted in a decrease of the same parameters [2,3]. In rats, electrical stimulation of the ovarian plexus nerve or the SON produces a vasoconstriction of ovarian arterioles and a reduction of ovarian blood flow [33]. Stimulation of the SON resulted in a significant decrease of E2, while electrical stimulation of the ovarian plexus nerve did not modify it. This suggests that autonomic nerves that reach the ovary via the SON have an inhibitory role in the secretion of ovarian E2 [40].
Sensorial innervations also play a role in regulating ovarian functions. Sensorial denervation induced by capsaicin injection, systemic or into the ovarian bursa, diminished spontaneous ovulation and the secretion of P4 and E2 [4]. Capsaicin treatment of ULO rats affects ovulation and the secretion of ovarian steroids depending on which ovary remained in situ and on the day of the cycle when treatment was performed [41,42].
By comparing hormone levels in untouched (control) and ULO rats, this investigation studied the participation of the SON innervation in regulating hormone secretion by the left and right ovaries. The following hypotheses were assessed: 1) Since the innervations arising from the ovaries carry neural signals to the CNS, extirpating one ovary will produce acute changes in the neuroendocrine mechanisms regulating hormone secretion by the in situ ovary, and the type and magnitude of these changes will depend on which ovary (left or right) remains in situ as well as on the day of the estrous cycle on which surgery is performed.
2) Because the degree of participation of the ovarian innervations in the modulation of hormone secretion seems to depend on the day of the estrous cycle, the acute effects of unilaterally sectioning the SON on P4, T and E2 serum levels will depend on the day of the cycle when denervation is performed.
3) Since after ULO the CNS no longer receives the neural information arising from the extirpated ovary, denervating the in situ ovary of animals with ULO, by sectioning the SON, will result in different hormone secretion changes than those resulting from sectioning the SON of animals with both ovaries in situ.
4) Since the neural regulation of ovarian functions seems to be asymmetric and to vary along the estrous cycle, the changes in P4, T and E2 levels observed in animals with ULO will depend on the ovary remaining in situ and the day of the estrous cycle when ULO surgery is performed. 5) Since acute bilateral ovariectomy affects ovarian steroid serum levels in different ways, the effects of ULO, SON sectioning, and ULO + SON sectioning will differ according to the manipulated organ and the hormone studied.
Methods
For this investigation, virgin adult female rats (195-225 g body weight) of the CIIZ-V strain from our own stock were used. The experiments were performed following the guidelines established by the Mexican Law of Animal Protection. The Committee of the Facultad de Estudios Superiores Zaragoza approved the experimental protocols.
Animals were kept under controlled lighting conditions (lights on from 05:00 to 19:00 h), with free access to food (Purina S.A., Mexico) and tap water. Estrous cycles were monitored by daily vaginal smears; only rats showing at least two consecutive 4-day cycles were used in the experiment.
Rats were randomly allotted to one of the five experimental groups described below. Animals from different experimental groups were treated simultaneously and sacrificed one hour after surgery (14:00-14:15 h). All surgeries were performed under ether anesthesia, using a ventral approach, at 13:00-13:15 h on each day of the estrous cycle. The animals woke up immediately after surgery.
Experimental groups
The number in parentheses indicates the number of animals in each group.
Unilateral section of the SON to ULO animals
The right [D1 (9), D2 (9), P (10) and E (10)] or left ovary [D1 (10), D2 (10), P (10) and E (9)] was extirpated and the SON of the in situ ovary was sectioned immediately afterwards. The wound was subsequently sealed. Figure 1 shows a summary of the treatments.
Autopsy procedures
Rats were sacrificed by decapitation; trunk blood was collected, allowed to clot at room temperature for 30 minutes, and centrifuged at 3,000 rpm for 15 minutes. Serum was stored at -20°C until P4, T and E2 concentrations were measured.
Hormone assay
Concentrations of P4, T, and E2 in serum were measured by radioimmunoassay (RIA), using commercial kits (Diagnostic Products, Los Angeles, CA, for T and E2). The intra- and inter-assay coefficients of variation were 5.3% and 9.87% for P4, 5.6% and 8.7% for T, and 6.9% and 10.8% for E2, respectively. The detection ranges and correlation coefficients were: P4, 0.05-40 ng/ml (r = 0.9991); T, 0.0020-8.0 ng/ml (r = 0.9851); E2, 0.2680-900.00 pg/ml (r = 0.9960).
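For readers unfamiliar with the assay statistics, intra-assay coefficients of variation of the kind quoted above are typically computed from replicate readings as CV% = 100 × SD / mean; a small sketch with made-up replicate values:

```python
import numpy as np

# Illustration of how an intra-assay coefficient of variation (CV%) is computed
# from replicate RIA readings: CV% = 100 * SD / mean, averaged over samples.
# The replicate values below are made up for illustration.
replicates = np.array([
    [4.9, 5.2, 5.0],     # sample 1: P4 readings (ng/ml)
    [12.1, 11.6, 12.4],  # sample 2
    [0.9, 1.0, 0.95],    # sample 3
])
cv_per_sample = 100 * replicates.std(axis=1, ddof=1) / replicates.mean(axis=1)
print(f"intra-assay CV%: {cv_per_sample.mean():.2f}")
```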
Statistics
Data on hormone concentrations in serum were analyzed using multivariate analysis of variance (MANOVA), followed by Tukey's test. Differences in serum hormone concentrations between two groups were analyzed using Student's t-test. A probability value of less than 5% was considered significant.
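A sketch of this analysis pipeline in Python using statsmodels; the data file and column names are hypothetical, and the original analysis may have been run in different software:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed layout: one row per rat, with hormone concentrations (P4, T, E2)
# and a 'group' label for the treatment. File and column names are hypothetical.
df = pd.read_csv("hormones.csv")

# Multivariate analysis of variance across treatment groups
manova = MANOVA.from_formula("P4 + T + E2 ~ group", data=df)
print(manova.mv_test())

# Tukey's post hoc test, run per hormone
for hormone in ("P4", "T", "E2"):
    print(pairwise_tukeyhsd(df[hormone], df["group"]))

# Two-group comparisons in the text used Student's t-test
# (e.g., scipy.stats.ttest_ind).
```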
Results
Effects of sham-surgery (Table 1)
Compared to the control group, sham surgery on D1, D2 and P resulted in higher P4 concentrations, while sham surgery performed on D2, P or E resulted in higher T concentrations. No changes in E2 serum concentrations were observed. Based on these results, the effects of the experimental surgeries were compared to their corresponding sham-surgery groups.
Effects of ULO (Table 2)
Effects on P4 serum levels
Compared to sham-surgery animals, left ULO (right ovary in situ) did not modify P4 serum levels, regardless of the day on which treatment was performed. Right ULO (left ovary in situ) performed on D1 resulted in lower P4 levels, while the same treatment performed on E resulted in higher P4 concentrations.
Effects on T serum levels
Compared to sham-surgery animals, left ULO (right ovary in situ) performed on P or E resulted in lower T levels. Compared to sham-surgery animals, right ULO (left ovary in situ) did not modify T serum levels, regardless of the day of the estrous cycle when surgery was performed. Right ULO (left ovary in situ) on E resulted in higher T levels than left ULO (right ovary in situ).
Effects on E2 serum levels
Left ULO (right ovary in situ) on D1, as well as right ULO (left ovary in situ) on P, resulted in lower E2 serum levels. For animals treated on P, right ULO (left ovary in situ) resulted in lower E2 concentrations than left ULO (right ovary in situ). Figure 2 shows a summary of the effects of ULO on P4, T and E2 serum levels.
Effects of sectioning the SON of rats with both ovaries in situ (Table 3)
Depending on the day of treatment, sectioning the SON of rats with both ovaries in situ had different effects on the concentrations of ovarian hormones in serum.
Effects on P4 serum levels
Compared to sham-operated animals, sectioning the right SON on D1 or E resulted in higher P4 levels. In rats treated on D2, sectioning the right SON resulted in higher P4 levels than sectioning the left SON, while sectioning the left SON on E resulted in higher P4 levels than sectioning the right SON or sham surgery.
Effects on T serum levels
Sectioning the left SON on D1 resulted in higher T levels than sham surgery or sectioning the right SON. In turn, sectioning the left SON on D2 resulted in lower T levels than sham surgery or sectioning the right SON, while sectioning the right SON on P resulted in lower T levels than sham surgery or sectioning the left SON.
Regardless of the day surgery was performed, sectioning the left or right SON did not modify E2 concentrations in serum. Figure 3 shows a summary of the effects of sectioning the SON on P4, T and E2 serum levels.
Effects of sectioning the SON of ULO rats on P4 serum levels
The effects in rats with left ULO (right ovary in situ) are shown in Figure 4A. In rats with right ULO (left ovary in situ), sectioning the left SON on P resulted in higher P4 levels (Figure 4B).
Effects on T serum levels
In rats with left ULO (right ovary in situ), sectioning the right SON on D1 resulted in lower T levels than ULO alone, while the same treatment performed on E resulted in higher T levels (Figure 5A). Sectioning the left SON of ULO rats (left ovary in situ) on D1 or P resulted in lower T levels than in ULO rats with the left ovary in situ, while the same treatment performed on D2 resulted in higher T levels than in ULO rats (Figure 5B).
Effects on E2 serum levels
Sectioning the right SON of rats with left ULO (right ovary in situ) on D1 or D2 resulted in higher E2 levels, while the same treatment on P resulted in lower E2 levels (Figure 6A). On the other hand, sectioning the left SON of rats with right ULO (left ovary in situ) on D1 or D2 resulted in higher E2 levels than ULO alone (Figure 6B).
Comparative effects of unilaterally sectioning the SON of rats with both ovaries and ULO rats (Figure 7)
Effects on P4 serum levels
In rats treated on D1 or E, P4 levels were higher in rats with ULO + sectioning of the right SON than in rats with both ovaries and sectioning of the right SON. Rats treated on D2 showed an inverse response.
Effects on T serum levels
In rats treated with ULO + sectioning of the SON on D1, T levels were lower than in rats with both ovaries and sectioning of the SON (right or left). An inverse result occurred in rats treated on D2. Rats treated with ULO + sectioning of the left SON on P had lower T levels than rats with both ovaries and sectioning of the left SON. In rats treated on E, rats with ULO + sectioning of the right SON had lower T levels than rats with both ovaries and sectioning of the right SON.
Effects on E2 serum levels
Rats with ULO + sectioning of the right SON on D2 had higher E2 levels than rats with both ovaries and sectioning of the right or left SON. Rats with ULO + sectioning of the SON on P had lower E2 levels than rats with both ovaries and sectioning of the right or left SON. Figure 8 shows a summary of the effects of sectioning the SON in ULO rats on P4, T and E2 serum levels.
Results were compared to the respective ULO treatment group.
Discussion
The results presented herein support the hypotheses that the secretion of ovarian steroid hormones is asymmetric and depends on the neural information arriving at the ovaries through the SON. The results also support the hypothesis that the secretion of steroid hormones varies through the estrous cycle. The results suggest that the acute extirpation of one ovary modifies the mechanisms regulating hormone secretion, and that these modifications depend on the extirpated ovary and the day of the cycle when surgery is performed.
Kawakami et al. [43,44] showed that electrical stimulation of the ventromedial hypothalamus and of the mediobasal prechiasmatic area in hypophysectomized and adrenalectomized rats provoked the release of P4 and E2 with no modifications in the levels of gonadotropins, ovarian blood flow, or GnRH [45], suggesting a direct neural control of ovarian steroidogenesis. In the prepubertal rat, the differences in P4, T and E2 levels induced by right- or left-ULO did not correlate with changes in FSH or LH concentrations, suggesting that the acute effects of unilateral ovariectomy on P4, T and E2 secretion by the ovaries do not depend on gonadotropin signals [46].
Noradrenaline and vasoactive intestinal peptide (VIP) stimulate the ovarian release of P4, while GnRH and gamma-aminobutyric acid (GABA) play an inhibitory role. Some of these neurotransmitters are also present in the SON and the coeliac ganglion [39]. In vitro studies by Garraza et al. [29] show that NPY, VIP or SP applied directly to ovaries obtained from rats on D1 inhibit the secretion of P4, while the same treatment on ovaries from rats on D2 stimulates P4 secretion. The participation of the ovaries and adrenals in maintaining normal P4 levels varies during the estrous cycle. There is evidence indicating that the ovaries release more P4 on D1 than the adrenals, while on D2, P and E the main source of P4 is the adrenals [47]. In the present study, by comparison with untouched control rats, the increase in P4 levels in sham-surgery rats treated on D1 or D2 was higher than that observed in ether-anaesthetized rats (164% vs. 20.6%; 237% vs. 66.2%) [34,35], suggesting that abdominal skin stimulation plays a stimulatory role in P4 release during D1 and D2. A similar effect was observed for T levels in sham-surgery rats treated on D2 or E. The neural connections between the abdominal skin and the ovaries proposed by Uchida et al. [20] would be the neural path used.
Table 3. Mean ± SEM of progesterone (ng/ml), testosterone and estradiol (pg/ml) serum levels in rats with unilateral sectioning of the SON.
Present results show that removing the right ovary (left ovary in situ) on D1 resulted in lower P4 levels, but removing the left ovary (right ovary in situ) resulted in hormone levels similar to those in animals with sham-surgery treatment. The results suggest that the increase in P4 levels observed in animals with sham-surgery treatment depends on the secretory activity of the right ovary.
Since sectioning the left SON on D1 or D2 resulted in lower P4 levels than sham surgery, we propose that the neural reflex elicited by the sham surgery arrives at the left ovary through the left SON. The decrease in P4 levels could be explained by the decrease in ovarian NA and VIP quantities, since unilateral or bilateral sectioning of the SON results in an acute decrease in NA levels [17,33], and both neurotransmitters stimulate ovarian P4 secretion [48]. As indicated by the present results, rats with right ULO (left ovary in situ) on E showed higher P4 levels than animals in the sham-surgery treatment group. We suppose that the increase in P4 secretion originates in the adrenals. Since the noradrenergic nerves arriving at the ovaries and the adrenals originate at the CSMG [49], it is possible that some kind of neural information arising from the left ovary is carried to the CSMG, resulting in the stimulation of the nerves innervating the adrenals. Since sectioning the left or right SON also resulted in higher P4 levels, we suppose that on the day of E the neural information carried by both SONs regulates P4 secretion in an inhibitory way that does not involve NA and VIP as neurotransmitters.
Removing the left ovary modified the way P4 secretion is regulated by the right ovary, since sectioning the right SON on P or E resulted in higher P4 levels than in ULO rats with the right ovary in situ. A similar effect occurred in rats with the left ovary in situ with sectioning of the left SON on P.
According to Odell and Parker [50], the major adrenal androgens are dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulphate (DHEAS), and A4, which are converted into androgens and estrogens by steroidogenic enzymes in the peripheral tissues [51]. In the rat, T secretion by the adrenals is limited [52]; thus, the changes in hormone levels observed in response to ULO or denervation must be explained by modifications in the ovarian capacity to secrete T. In rats with ULO treatment, the SON plays a stimulatory role in T secretion by the left ovary, while for the right ovary the SON plays a stimulatory role on D1 and an inhibitory role on E. Such a difference may result from changes in the sensitivity of the ovary to the effects of LH and/or from the stimulation of P450 aromatase activity. On D1, ULO rats with a denervated ovary had lower T levels and higher E2 levels than rats with just ULO treatment.
On the day of estrus, the right SON plays an inhibitory role in ovarian E2 secretion [40]. The acute effects of ULO on E2 levels were observed on D1 for the right ovary and on P for the left ovary. For the right ovary the changes in E2 secretion depend on the innervation provided by the SON, but for the left ovary they do not, suggesting that the left and right ovaries are subject to different kinds of regulation.
In vitro studies show that the ovaries' ability to secrete hormones changes with the presence or absence of a diverse group of neurotransmitters [21,[53][54][55]. Thus, the acute effects of ULO or of sectioning the SON seem to influence the activity and/or the expression of enzymes participating in the synthesis of the steroid hormones produced by the ovaries.
In ULO rats, sectioning the SON of the in situ ovary results in a different response than that observed in animals with both ovaries, suggesting that the effects of ULO result from the modifications in ovarian hormone levels and from the alterations in the neural information arising from the extirpated organ [1].
The variations in the asymmetrical performance of the right and left ovaries during the estrous cycle could be related to changes in the neural information received by the ovaries from the CSMG. Morán et al. [56] showed that the neural connections between the CSMGs and the right and left ovaries show a mirror-image pattern that varies along the estrous cycle, and that the left ovary, but not the right one, has connections with both CSMGs. Furthermore, the SON is the main neural pathway connecting the ovaries to the CSMG [48].
Mortality from neurological or mental diseases is higher in women who underwent bilateral oophorectomy before the age of 45 years compared with referent women, and it is not attenuated by estrogen treatment given from the time of oophorectomy [57].
In women of reproductive age, the most common endocrinopathy is polycystic ovary syndrome (PCOS). There is evidence that PCOS is associated with increased sympathetic nerve activity, since repeated low-frequency electro-acupuncture treatment induced regular ovulation in women with PCOS [58], inhibited hyperactivity in the sympathetic nervous system [59], and improved hyperandrogenism [58,60]. Bilateral sectioning of the SON in rats with PCOS, induced by estradiol valerate, restored ovulation [61], while unilateral sectioning of the SON restored ovulation in the innervated ovary [62]. The neural reflexes induced from the abdominal skin to the ovaries affect ovarian blood flow and SON activity [20], and low-frequency electro-acupuncture treatment is delivered by acting on the wall of the abdomen. It is therefore possible that its effects are related to changes in SON activity. This possibility is based on the present results (acute changes in P4, T and E2 levels induced by ovarian denervation and the elimination of one of the neural communications between the ovary and the CNS) and on the results of Kagitani et al. [40] showing that SON stimulation results in a decrease in E2 secretion. Taken together, present and previous results indicate that the mechanism regulating steroid hormone secretion by the ovaries is different for each hormone and for each ovary. Endocrine signals originate from, and arrive at, the ovaries; the ovaries receive and send neural information that is processed in the CNS and participates in regulating the secretion of gonadotropins related to the regulation of ovarian functions.
Figure 7. Comparative effects on progesterone, testosterone and estradiol. Means ± SEM of the comparative effects of unilateral sectioning of the SON on progesterone, testosterone and estradiol between rats with unilateral sectioning of the SON and rats with unilateral ovariectomy (OVX) followed by unilateral sectioning of the SON. * p < 0.05 vs. section (Student's t-test). | 6,003.2 | 2011-03-18T00:00:00.000 | [
"Biology"
] |
Mitochondrial aconitase and citrate metabolism in malignant and nonmalignant human prostate tissues
Background In prostate cancer, normal citrate-producing glandular secretory epithelial cells undergo a metabolic transformation to malignant citrate-oxidizing cells. m-Aconitase is the critical step involved in this altered citrate metabolism that is essential to prostate malignancy. The limiting m-aconitase activity in prostate epithelial cells could be the result of a decreased level of m-aconitase enzyme and/or the inhibition of existing m-aconitase. Earlier studies identified zinc as an inhibitor of m-aconitase activity in prostate cells, and showed that the depletion of zinc in malignant cells is an important factor in this metabolic transformation. However, a possibility remains that an altered expression and level of m-aconitase enzyme might also be involved in this metabolic transformation. To address this issue, the in situ level of m-aconitase enzyme was determined by immunohistochemical analysis of prostate cancer tissue sections and malignant prostate cell lines. Results The immunocytochemical procedure successfully identified m-aconitase localized in the mitochondrial compartment of PC-3, LNCaP, and DU-145 malignant prostate cell lines. The examination of prostate tissue sections from prostate cancer subjects demonstrated that m-aconitase enzyme is present in the glandular epithelium of normal glands, hyperplastic glands, adenocarcinomatous glands, and prostatic intraepithelial neoplastic foci. Quantitative analysis of the relative level of m-aconitase in the glandular epithelium of citrate-producing adenomatous glands versus the citrate-oxidizing adenocarcinomatous glands revealed no significant difference in m-aconitase enzyme levels. This is in contrast to the down-regulation of the ZIP1 zinc transporter in the malignant glands versus hyperplastic glands that exists in the same tissue samples. Conclusion The results demonstrate the existence of m-aconitase enzyme in the citrate-producing glandular epithelial cells, so that a deficient level of m-aconitase enzyme is not associated with the limiting m-aconitase activity that prevents citrate oxidation in these cells. The level of m-aconitase is maintained in the malignant cells, so that an altered enzyme level is not associated with the increased m-aconitase activity. Consequently, the elevated zinc level that inhibits the enzyme is responsible for the impaired citrate oxidation in normal and hyperplastic prostate glandular epithelial cells; and the down-regulation of the ZIP1 zinc transporter and the corresponding depletion of zinc result in increased activity of the existing m-aconitase in the malignant prostate cells. These studies now define the mechanism for the metabolic transformation that characterizes the essential transition of normal citrate-producing epithelial cells to malignant citrate-oxidizing cells.
Background
In prostate cancer (PCa), malignancy develops mainly from the glandular epithelium of the prostate gland peripheral zone. A major and persistent characteristic that distinguishes normal prostate tissue from malignant prostate tissue is the extraordinarily high citrate content of the former versus the low citrate content of the latter (for reviews see [1-4]). The normal secretory epithelial cells have the specialized function of production and secretion of extraordinarily high levels of citrate. To achieve this capability, these "citrate-producing" cells possess a uniquely limiting m-aconitase enzyme activity that impairs citrate oxidation. In malignant cells, m-aconitase activity is not limiting and citrate oxidation is not impaired. This metabolic transformation of normal citrate-producing cells to citrate-oxidizing malignant cells is an essential event in the development of prostate malignancy. Also, benign prostatic hyperplasia (BPH) involves the proliferation of citrate-producing glands. These are consistent relationships that have been corroborated and established by in situ magnetic resonance spectroscopy imaging of the human prostate, which is now the most reliable procedure for the identification and localization of malignant loci in the prostate gland (for reviews see [4-6]). Consequently, it is essential to establish the mechanism of impaired citrate oxidation in the normal secretory epithelial cells, and the alteration of citrate production associated with the metabolic transformation in the malignant cells.
In normal mammalian cell intermediary metabolism, m-aconitase typically exists in excess, is not a rate-limiting or regulatory enzyme, and catalyzes the equilibrium reaction:
citrate (~88%) ⇌ cis-aconitate (~4%) ⇌ isocitrate (~8%).
This results in a characteristic citrate/isocitrate ratio of ~10/1 for most mammalian tissues, regardless of the citrate concentration. In contrast, citrate-producing normal prostate glands and hyperplastic glands exhibit a citrate/isocitrate ratio of ~30/1, which is indicative of a limiting m-aconitase activity [1,7]. This is substantiated by the impaired citrate oxidation, but not isocitrate oxidation, by citrate-accumulating prostate cells [8,9]. A limited m-aconitase activity could be the result of an inhibition of the enzyme and/or a decrease in the level of the enzyme. Previous studies established that the accumulation of high zinc levels, which occurs in normal prostate cells [3,10,11], results in the inhibition of m-aconitase activity and in a shift of its equilibrium toward citrate [12,13]. In PCa, the malignant prostate cells do not accumulate zinc, which leads to the expectation that m-aconitase activity is not inhibited in these cells and citrate oxidation occurs. In a recent study involving measurements with prostate cancer tissue sections [14], we demonstrated that the ZIP1 zinc transporter is down-regulated and zinc levels are depleted in the glandular epithelium of adenocarcinomatous glands. However, an alternative or additional consideration is the possibility that the m-aconitase enzyme level might be low in citrate-producing normal prostate cells, and/or the enzyme might be over-expressed in malignant cells. Therefore, it was important to determine the level of m-aconitase enzyme in human prostate tissue samples and compare its level in malignant and nonmalignant glands. Moreover, this study was conducted with the same samples as in our previous report, so that the changes in ZIP1 levels can be contrasted with the present results on m-aconitase levels. This report provides the first identification of m-aconitase enzyme in malignant and nonmalignant human prostate glandular epithelium.
Results
Costello et al. [15] reported that the m-aconitase antibody employed in the present study was specific for the mitochondrial isoform and was not reactive with cytosolic (c-) aconitase. This conclusion was based on positive immunoreactivity (Western blots) with extracts of isolated mitochondrial preparations, and negative reactivity with cytosolic extract preparations. However, for the present immunohistochemistry studies, it was essential to establish that intra-mitochondrial m-aconitase could be detected and that the antibody reaction was limited to the mitochondria. To establish this, immunofluorescence analysis was conducted with LNCaP cells in conjunction with mitochondrial staining with MitoTracker. Figure 1 shows that the m-aconitase immunoreactivity was associated with and specific for the mitochondrial compartment, and that the antibody is selective for the m-aconitase isozyme.
We then proceeded to analyze human prostate tissue sections for m-aconitase immunoreactivity in malignant versus nonmalignant foci. Figure 2 presents representative immunohistochemical detection of m-aconitase in BPH, malignant, normal, and PIN foci. The results show that m-aconitase is present in the glandular epithelium of all the glands regardless of the pathological state. Also, the cellular level of m-aconitase is consistently lower in stromal tissue than in glandular epithelium. Most importantly, immunopositive detection of m-aconitase is evident in the malignant glands as well as in the normal glandular epithelium. Table 1 summarizes the immunohistochemical scoring of m-aconitase reactivity of tissue sections from 22 cases of prostate cancer. The table presents the two parameters employed for quantitation of the m-aconitase enzyme level: the percent of cells within a gland that exhibited m-aconitase immunopositivity; and the cellular level of m-aconitase as represented by the m-aconitase-positive dots (mitochondria) within the cells. In this study, we restricted the analysis to the comparison of m-aconitase in glands located in the same tissue section. One reason was to eliminate, or at least minimize, any potential differences arising from antibody diffusion into the cells and into the mitochondria for immunoreactivity. Thus, any comparative differences observed in the level of immunoreactivity in the different glands of the same tissue section would be due to comparative differences in the level of m-aconitase.
It is apparent from Table 1 that m-aconitase immunoreactivity is essentially the same for BPH glandular epithelium and adenocarcinomatous glands. BPH glands, like normal peripheral zone glands, are citrate-producing glands, whereas adenocarcinomatous glands are citrate-oxidizing glands. Also, there is no correlation between tumor grade and m-aconitase immunoreactivity. This is consistent with the fact that the decrease in citrate in malignant cells occurs very early in malignancy and persists through the progression of malignancy [4-6]. Although the number of cases with tissue samples containing normal glands and PIN (believed by many to be a precursor stage of malignancy) was insufficient for statistical analysis, these glands exhibited m-aconitase scoring similar to each other and similar to BPH and adenocarcinoma. These results show that an alteration in the level of m-aconitase enzyme (i.e., altered m-aconitase expression/biosynthesis) is not associated with the altered citrate oxidation or production in malignant versus non-malignant cells. This is in contrast to our earlier study [14] of changes in the level of ZIP1 transporter in sections from the same tissue samples employed in this current report. For comparison, Table 1 shows those earlier results. The absence of a change in m-aconitase enzyme level is evident in the same glands that exhibit a marked down-regulation of the level of ZIP1 transporter in malignant versus non-malignant glands. Moreover, accompanying the down-regulation of ZIP1 is a corresponding decrease in cellular zinc levels [14]. Thus the absence of a demonstrable change in m-aconitase enzyme level is not an artifact.
For additional corroboration of these results, we determined the expression of m-aconitase in LNCaP, DU-145, and PC-3 malignant prostate cell lines and the effect of zinc treatment on the level of m-aconitase in these cell lines. The results (Figure 3) show that m-aconitase is expressed in all the malignant cell lines. It is notable that the level of m-aconitase in untreated LNCaP cells is higher than in PC-3 cells, although citrate oxidation by LNCaP cells is negligible compared to the high citrate oxidation by PC-3 cells [16,17]. This is due to the higher accumulation of zinc in LNCaP cells, which inhibits m-aconitase activity [16]. Exposure of the cells to zinc had no effect on the level of m-aconitase (Figure 3). However, the conditions employed result in cellular accumulation of zinc [18,19], which inhibits m-aconitase activity and citrate oxidation. Therefore, these studies establish, for the first time, that it is the accumulation of inhibitory levels of zinc acting on m-aconitase activity, and not limiting levels of m-aconitase enzyme, that prevents citrate oxidation in prostate cells.
Discussion
In typical mammalian cell metabolism, citrate is an essential intermediate for oxidation via the Krebs cycle and subsequent ATP production from coupled phosphorylation. Citrate must be converted to isocitrate by the action of m-aconitase as the entry step for its oxidation via the Krebs cycle. It is important in the intermediary energy metabolism of mammalian cells that m-aconitase is not a limiting reaction that would essentially truncate the Krebs cycle. Therefore, the constitutive m-aconitase activity is typically in excess in mammalian cells, which ensures an adequate flux of citrate to isocitrate for oxidation. However, in normal prostate epithelial cells citrate is predominantly an end-product of intermediary metabolism, rather than a utilizable intermediate as in most other cells. It is well established that citrate-producing prostate cells and their mitochondria readily oxidize isocitrate, but not citrate [7-9,20]. These collective observations establish that the accumulation of citrate in citrate-producing prostate cells is predominantly due to a limiting m-aconitase activity (for additional supporting evidence see [1-4,21]). Therefore, the possibility existed that the constitutive level of m-aconitase enzyme in prostate cells might be uniquely low and a contributing factor in the limited m-aconitase activity. However, several studies with rat prostate cells, pig prostate cells and human prostate cell lines consistently demonstrated that the expression and level of m-aconitase is not lower in citrate-producing cells as compared to that found in typical citrate-oxidizing cells [15,22-24]. While this prostate m-aconitase relationship has been established in animal studies and in cell culture studies, the level of m-aconitase enzyme in human prostate tissue had never been determined. The present studies reveal that the constitutive level of m-aconitase is essentially the same in BPH glandular epithelium and normal peripheral zone glandular epithelium (both being citrate-producing glands), in the adenocarcinomatous glands (citrate-oxidizing glands), and in PIN (a presumed early malignant stage).
The absence of an altered expression of m-aconitase is in marked contrast with changes in the expression of ZIP1 zinc uptake transporter and in cellular zinc levels that we recently reported [14]. The elimination of the involvement of altered m-aconitase enzyme level leads to the conclusion that the rate-limiting m-aconitase activity is due to an inhibition of the enzyme. It is well established that normal peripheral zone glands and BPH glands accumulate extremely high zinc levels in association with their unique ability to produce and accumulate extremely high citrate levels; and that, in PCa, the malignant glands exhibit a marked decrease in zinc levels and in citrate levels [10,11]. These relationships are consistent with the established effects of zinc in the inhibition of m-aconitase activity and citrate oxidation [12,13], which permits the unique peripheral zone glandular function (as in the prostate of other animals) of net citrate production. In PCa, the lost ability of the malignant cells to accumulate zinc eliminates its inhibition of m-aconitase activity; and thereby increases the oxidation of citrate and decreases the cellular level of citrate.
Conclusion
The present study demonstrates the existence of similar levels of m-aconitase enzyme in the non-malignant citrate-producing normal and BPH glandular epithelial cells and in the malignant citrate-oxidizing adenocarcinomatous cells. These results, in concert with previous reports, establish the mechanism of the regulation of m-aconitase and citrate oxidation in normal human prostate glandular epithelial cells and in the metabolic transformation in the malignant cells. The inhibitory effect of elevated zinc levels on m-aconitase activity is responsible for the impaired citrate oxidation by normal and BPH citrate-producing prostate cells (reaction 1). The lost ability of the malignant cells to accumulate zinc removes the inhibition of the existing m-aconitase so that citrate oxidation occurs (reaction 2). This elucidation of the mechanism associated with the metabolic transformation to citrate oxidation provides the bioenergetic and synthetic requirements that are essential for the manifestation of the malignant process of the prostate neoplastic cell. Thus, important insight into the genetic/metabolic relationships of the pathogenesis and progression of prostate cancer is now evolving, which will provide new approaches to its detection and treatment.
Human prostate tissue
Slides from twenty-two cases of prostatic adenocarcinoma that contained both adenocarcinomatous foci and adjacent benign prostatic hyperplasia were obtained from RPCI. Four of the cases contained normal prostatic glands and six cases contained prostatic intra-epithelial neoplastic foci (PIN). One section of normal tissue and another of BPH were obtained from the Cooperative Human Tissue Network of the National Cancer Institute, National Institutes of Health, Bethesda, MD. The slides were coded without identification related to the patients. Institutional Review Board approval was obtained.
Immunohistochemistry
Immunohistochemistry was performed with the m-aconitase antibody (1:500 dilution) developed and described by Costello et al. [15]. The immunohistochemistry protocol is described by Desouki and Rowan [25]. Briefly, the slides were deparaffinized by incubation in xylene and ascending grades of alcohol. Antigen retrieval was performed by heating in 10 mM sodium citrate buffer (pH 6.0) at 98°C. The slides were then incubated in 1% hydrogen peroxide, blocked with 10% goat serum with avidin D (Vector Laboratories, Burlingame, CA), incubated with the m-aconitase antibody in 10% goat serum with biotin at 4°C overnight, and subsequently incubated with a secondary peroxidase-labeled anti-rabbit IgG (H+L) antibody (Vector Laboratories, Burlingame, CA) at a concentration of 5 µg/ml. Color was developed by incubating the slides with a DAB kit (Vector Laboratories, Burlingame, CA), followed by counterstaining with hematoxylin. One section from each prostatic tissue was stained with H&E for histological characterization of lesions and grading of tumors by a pathologist (M.M.D.), according to the World Health Organization grading system [26]. Grade 1 is defined by well-differentiated glands with minimal anaplasia, in which the nuclei are almost uniform with minimal variation in size and shape and few detectable nucleoli. Grade 2 is defined by moderately differentiated glands with moderate nuclear anaplasia and many nucleoli. Grade 3 is defined by poorly differentiated or undifferentiated glands showing marked anaplasia, in which the nuclei show marked variation in size and irregular shapes, are vesicular, and display marked abnormal mitotic figures. All sections were examined with a Nikon E600 light microscope. The pictures were processed with Spot software, version 4.1 (Diagnostic Instruments, Inc., Sterling Heights, MI).
Figure 3. Western blot analysis of m-aconitase levels in human prostate cancer cell lines. The cells were exposed for 24 hours to medium supplemented with 15 µM zinc and to unsupplemented medium.
Evaluation of m-aconitase immunoreactivity
The m-aconitase enzyme level was determined by the immunohistochemistry scoring method of Desouki and Rowan [25]. Two criteria were employed: the percent of epithelial cells within the glands that exhibit m-aconitase immunopositivity; and the number of immunopositive cytoplasmic dots (aconitase-detected mitochondria) within the epithelial cells. Cells with punctate cytoplasmic dots were considered positive for m-aconitase. Twenty to fifty randomized high-power fields (oil immersion, x100) in a section from each case were evaluated. The scores employed were: negative, no positive cells; score +, <10% positive; score ++, 10-50% positive; score +++, >50% positive. To quantify m-aconitase within the epithelial cells, the average number of identifiable, well-defined m-aconitase-immunopositive dots per cell was determined by oil-immersion high-power field (x100) examination of 50 cells. The Excel program was used to calculate the correlation between tumor grades and aconitase immunopositivity scores. A t-test was also conducted for statistical comparison of m-aconitase levels in BPH glands versus adenocarcinomatous glands.
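To make the scoring rule concrete, below is a minimal Python sketch of the two criteria described above; the function names and the example values are illustrative, not from the original study.

```python
# Minimal sketch of the two m-aconitase scoring criteria described above.
# Function names and example counts are illustrative assumptions.

def positivity_score(percent_positive: float) -> str:
    """Map the percent of m-aconitase-immunopositive epithelial cells
    within a gland to the ordinal score used in the text."""
    if percent_positive == 0:
        return "negative"
    elif percent_positive < 10:
        return "+"
    elif percent_positive <= 50:
        return "++"
    else:
        return "+++"

def mean_dots_per_cell(dot_counts: list[int]) -> float:
    """Average number of well-defined immunopositive cytoplasmic dots
    (aconitase-detected mitochondria) over the cells examined."""
    return sum(dot_counts) / len(dot_counts)

# Example: a gland with 62% positive cells scores '+++'.
print(positivity_score(62))                 # +++
print(mean_dots_per_cell([8, 12, 10, 9]))   # 9.75
```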
Western blot analysis of m-aconitase
The level of m-aconitase in the malignant prostate cell lines was determined by Western blot analysis of cell extracts as previously described [15]. | 4,086.6 | 2006-04-04T00:00:00.000 | [
"Biology",
"Medicine"
] |
Intergenic SNPs in Obstructive Sleep Apnea Syndrome: Revealing Metabolic, Oxidative Stress and Immune-Related Pathways
There is strong evidence supporting the contribution of genetic factors to obstructive sleep apnea syndrome (OSAHS) susceptibility. In the current study we analyzed, both in a clinical cohort and in silico, four single nucleotide polymorphisms (SNPs), rs999944, rs75108997, rs35329661 and rs116133558, that have been associated with OSAHS. In 102 patients with OSAHS and 50 healthy volunteers, genetic testing of the above polymorphisms was performed. Polymorphism rs116133558 was invariant in our study population, whereas polymorphism rs35329661 was more than 95% invariant. Polymorphism rs999944 displayed significant (>5%) variance in our study population and was used in the binary logistic regression model. In silico analyses of the mechanism by which these three SNPs may affect the pathophysiology of OSAHS revealed a transcriptomic network of 274 genes. This network was involved in multiple cancer-associated gene signatures, as well as the adipogenesis pathway. This study uncovers a regulatory network in OSAHS using transcriptional targets of intergenic SNPs, and maps their contributions to the pathophysiology of the syndrome onto the interplay between adipocytokine signaling and cancer-related transcriptional dysregulation.
Introduction
Obstructive sleep apnea hypopnea syndrome (OSAHS) is a common disorder presenting with recurrent episodes of partial or complete upper airway obstruction that result in sleep disruption and intermittent hypoxemia with variable severity. On the genomic level, this pathophysiological substrate is linked with chronic upregulation of oxidative stress and hypoxia-responsive pathways and genes [1].
Individuals with OSAHS are at increased risk for cardiovascular disease, diabetes, stroke, cognitive impairment and many other disorders [2,3], and exhibit increased morbidity and mortality rates [4].
There is strong evidence supporting the contribution of genetic factors to OSAHS susceptibility [9]. Recent genome-wide association studies have implicated many hypoxia-signaling and sleep pathways [1]. Genome-wide association studies (GWAS) in OSAHS have been employed to discover genomic regions containing causal variants or variants influencing the expression of genes outside the identified region (e.g., enhancers and/or expression quantitative trait loci) [8]. The pathophysiological role of intergenic regions and pseudogenes is far less elucidated in several diseases, including OSAHS [10]. Specifically, no study to date has focused on the role of intergenic SNPs in its pathophysiology.
In the current study we aimed to elucidate the role of four intergenic SNPs previously identified by Cade et al. [8]: rs999944 (genome-level significant association with AHI), rs75108997 (associated with sleep SpO2), rs35329661 (associated with event duration in females) and rs116133558 (associated with sleep SpO2). We examined these variants both in a clinical cohort and via a standalone in silico workflow.
Materials and Methods
This study was approved by the Institutional Review Board of University Hospital of Larissa (No: 63569-24 December 2018) and written informed consent was obtained from all participants.
Study Population
Consecutive patients with suspected OSAHS who underwent a polysomnography (PSG) test in the Clinic of Sleep Disorders of the Respiratory Medicine Department of the University General Hospital of Larissa were invited to participate in this study. All subjects underwent an overnight laboratory-based PSG, and the apnea-hypopnea index (AHI) was measured in all of them. OSAHS was defined as an AHI > 5 events/h together with daytime symptoms specific for OSAHS. Hypopnea was defined as either (a) a >50% reduction in airflow, (b) a <50% reduction in airflow associated with a desaturation of >3%, or (c) a moderate reduction in airflow accompanied by an electroencephalogram (EEG)-defined arousal. Patients were grouped according to the following classification by the American Academy of Sleep Medicine (AASM 2007): mild disorder (AHI: 5-15 events/h), moderate disorder (AHI: 15-30 events/h), and severe disorder (AHI > 30 events/h). Subjects with an AHI < 5 events/h were classified as healthy. A total of 102 Greek patients with OSAHS and 50 non-OSAHS, age- and gender-matched Greek controls were included in this study.
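The AASM 2007 severity grouping quoted above can be expressed as a small lookup; the sketch below is illustrative only, and the function name and the handling of the boundary values (15 and 30 events/h assigned to the lower category) are our assumptions.

```python
# Illustrative sketch of the AASM 2007 severity grouping described above.
# Boundary handling (15 and 30 events/h fall in the lower category) is assumed.

def classify_ahi(ahi: float) -> str:
    """Severity grouping by apnea-hypopnea index (events/h)."""
    if ahi < 5:
        return "healthy (non-OSAHS)"
    if ahi <= 15:
        return "mild"
    if ahi <= 30:
        return "moderate"
    return "severe"

print(classify_ahi(22.4))  # moderate
```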
DNA Extraction and Genotyping
Genomic DNA was extracted from whole blood using a DNA isolation kit, according to the manufacturer's protocol. The purity and concentration of DNA were measured with a NanoDrop spectrophotometer (Thermo Scientific, Waltham, MA, USA), with A260/A280 absorbance ratios ranging from 1.8 to 2.0. Genotyping of SNPs rs999944, rs75108997, rs35329661 and rs116133558 was performed using the TaqMan single nucleotide polymorphism (SNP) genotyping technique on an ABI PRISM® 7900 HT Fast Real-Time PCR System (Applied Biosystems, Waltham, MA, USA).
Statistical Analysis
All data analyses were performed with SPSS 23.0 (IBM Corporation, Armonk, NY, USA). Statistical significance was accepted at a level of p < 0.05. Deviation from the Hardy-Weinberg equilibrium was assessed using a chi-squared test with one degree of freedom. Statistical significance for categorical variables was assessed by the chi-squared or Fisher's exact test. OSAHS risk and severity associated with the candidate SNPs were estimated by computing the odds ratios (ORs) and their 95% confidence intervals (CIs) by logistic regression analysis, adjusting for age, gender, and BMI. The analyses were done first per allele (allelic model) and then per genotype (additive model). Post-hoc power analysis was performed via the Bioinformatics Institute's Online Sample Size Estimator (OSSE) (Available from: http://osse.bii.a-star.edu.sg/calculation2.php, accessed 16 September 2021) [11].
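For concreteness, a minimal sketch of the Hardy-Weinberg equilibrium check described above (chi-squared test with one degree of freedom) is given below; the genotype counts in the example are hypothetical, not the study's data.

```python
# Minimal sketch of a Hardy-Weinberg equilibrium test with 1 df,
# as described above. Example genotype counts are hypothetical.
from scipy.stats import chi2

def hwe_chi2_p(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Return the p-value for deviation from Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
    # 1 df: 3 genotype classes - 1 - 1 estimated allele frequency
    return chi2.sf(stat, df=1)

print(hwe_chi2_p(60, 70, 22))  # p > 0.05 implies no significant deviation
```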
BLAST Analyses of Pseudogene and SNP Alignment in Intergenic Regions
SNPs mapped to pseudogenes, and the corresponding pseudogenes themselves, were analyzed for their potential biological activity in a multistep procedure. First, we determined whether a given pseudogene produced transcripts by querying the NHGRI-EBI GWAS Catalog, a database of curated genome-wide association studies maintained by the National Human Genome Research Institute (NHGRI; available from: https://www.ebi.ac.uk/gwas, accessed 16 September 2021) [12]. This resource allows the integration of both SNP and trait characteristics, as well as the integration of SNP data from multiple studies with other resources. As a next step, the Basic Local Alignment Search Tool (BLAST) [13] was used to detect overlap between a ±100 kb region [14,15] containing the intergenic SNP and pseudogenes within the same region. Subsequently, the GeneCards database (available from: https://www.genecards.org/, accessed 16 September 2021) [16] was mined for potential gene targets and transcription factor binding sites for each detected pseudogene.
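The region-extraction step of this procedure can be illustrated as follows; the sketch assumes only that a SNP's genomic coordinate is known, and the chromosome and position used in the example are placeholders, not the true locations of the study's SNPs.

```python
# Hedged sketch of the region-extraction step: given a SNP's genomic
# coordinate, derive the ±100 kb window whose sequence is then BLASTed
# against pseudogenes. Coordinates below are placeholders.

FLANK = 100_000  # ±100 kb, as described in the text

def snp_window(chrom: str, pos: int, flank: int = FLANK) -> tuple[str, int, int]:
    """Return the (chrom, start, end) window centered on an intergenic SNP."""
    return chrom, max(0, pos - flank), pos + flank

chrom, start, end = snp_window("chr8", 12_345_678)  # placeholder coordinate
print(f"{chrom}:{start}-{end}")  # window submitted to BLAST
```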
Determination of SNP Interactions, and the Associated Genes' Biological Networks
The SNPSnap tool (available from: https://data.broadinstitute.org/mpg/snpsnap/, accessed 16 September 2021) was used in order to identify interactions between SNPs, and identify common biological functions affected downstream [17].
Based on the above analyses and the results from the GeneCards database, we constructed a network of putative interactors from (a) SNP-associated transcription factors, (b) downstream genes, and (c) ARBB1, the gene mapped to the rs35329661 SNP (3′ UTR variant). This interactome, which included a total of 274 genes, was dubbed I_A. Following the extraction of the I_A interactome, we performed gene set enrichment analyses via the Enrichr web service (available from https://maayanlab.cloud/Enrichr/, accessed 16 September 2021) in order to predict its biological functions [18].
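Enrichment services such as Enrichr commonly rank gene sets with a Fisher's exact test; the sketch below illustrates that kind of overrepresentation statistic for a query signature like I_A. The pathway size, overlap and background size in the example are placeholder values, not results from this study.

```python
# Sketch of the overrepresentation statistic commonly underlying gene set
# enrichment (a one-sided Fisher's exact test). Counts are placeholders.
from scipy.stats import fisher_exact

def enrichment_p(signature_size: int, pathway_size: int,
                 overlap: int, background: int = 20_000) -> float:
    """Fisher's exact test for overlap between a query gene signature
    (e.g., the 274-gene I_A interactome) and a pathway gene set."""
    table = [
        [overlap, signature_size - overlap],
        [pathway_size - overlap,
         background - signature_size - pathway_size + overlap],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p

print(enrichment_p(signature_size=274, pathway_size=200, overlap=18))
```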
Demographics, Clinical, Biochemical and PSG Characteristics of the Cohort
The main demographic and clinical characteristics of the 102 OSAHS cases and 50 healthy controls that underwent polysomnography are presented in Tables 1 and 2. There were no significant differences between the OSAHS cases and control groups with respect to age or gender (p > 0.05).
SNP Variability and Multivariate Analyses of SNP-OSAHS Associations in the Clinical Cohort and Power Calculations
Polymorphisms rs116133558 and rs75108997 were invariant in our study population, whereas polymorphism rs35329661 was more than 95% invariant (C/C: 95.4%; C/T: 1.3%; T/C: 3.3%). Polymorphism rs999944 displayed significant (>5%) variance in our study population (A/G: 20%; G/G: 20%) and was used in the binary logistic regression model. A binary logistic regression model adjusting for age, sex, BMI and lipid profile did not detect statistically significant differences in genotype frequency for the rs999944 SNP between OSAHS groups and controls (Table 3). Post-hoc analyses via OSSE for each SNP revealed that our study was under-powered (power < 50%).
BLAST Analyses
A BLAST analysis of the ±100 kb region containing rs116133558 revealed a hit in pseudogene AC114402.
Correspondingly, BLAST analysis of rs75108997 revealed total query cover and sequence similarity with pseudogene AL663058.5. The subsequent GeneCards query did not reveal any significant transcription factor binding sites or corresponding gene targets. rs999944 displayed total query cover and sequence similarity with pseudogene AC007880.2.
Identification of SNP Gene Targets and Transcription Factors via GeneCards and SNPSEA
Interrogation of the GeneCards database revealed several gene targets and transcription factor binding sites (TFBS) for rs116133558 and rs999944 (Table 4). Collectively, transcription factors (as genes) and gene targets were used to create a 274-gene signature that comprised interactome A (I_A), the putative functional network of these two SNPs. SNPSEA did not reveal a direct interaction between the two SNPs (Supplementary File S2). Tables 5 and 6 report the first 10 pathways (sorted by adjusted p-value) resulting from GSEA on I_A via Enrichr (see Supplementary File S4 for the raw data).
Discussion
In this study, we explored the potential role of four intergenic SNPs in the pathogenesis of sleep apnea, both in a clinical cohort and in silico. Among the four selected variants, only rs999944 displayed significant variance in our population. No significant associations were detected between the genotype frequency of rs999944 and OSAHS biological parameters. In silico analyses of the mechanism by which these three SNPs may affect the pathophysiology of OSAHS revealed a transcriptomic network of 274 genes. This network was involved in multiple cancer-associated gene signatures, as well as the adipogenesis pathway.
Adipocytokine Signaling in OSAHS
The exposure of adipose tissue to intermittent hypoxia (IH) is an established perturbing factor in the pathophysiology of OSAHS [19]. In animal models of IH exposure, IH has been shown to prime adipocytes via C/EBP, disrupting PPAR and adiponectin signalling via adipocytokine release [20]. The sole study of visceral fat transcriptomes from OSAHS patients has corroborated this model in humans. More specifically, the PPAR system and adiponectin signalling cascades were disrupted in the setting of increased proinflammatory signalling [21].
Our findings corroborate these studies on both the gene and pathway level. On the pathway level, the TNF, adipogenesis and adipocytokine signatures that we detected may reflect the disruption of adiponectin signalling that has been correlated with adipocyte volume [19]. On the gene level, key targets of the rs116133558- and rs999944-associated pseudogenes have previously been reported as differentially expressed genes in IH models [19] and OSAHS [21], including PPARG, CEBPA, CEBPB, STAT3, RXRA and RXRB. Interestingly, the genotype frequency of rs999944 was independently associated with glucose levels in our clinical cohort, a relationship that could reflect decreased adiponectin signalling.
These findings indicate that the mechanism by which both intergenic polymorphisms contribute to OSAHS lies in their interaction with transcription factors and second-order phenomena, such as the disruption of regulatory networks [20], in contrast with other polymorphisms such as missense variants, where the disease association can be approached as a knock-out vs. wild type comparison.
Cancer-Related Networks in OSAHS
The epidemiological link between OSAHS and cancer is a complex one, potentially riddled with both clinical and biochemical confounders [22]. The main issue with the OSAHS-cancer connection is that both diseases are multifactorial and share both mechanisms [23] and comorbidities [24]. On the genomic level, this interplay may be further refined into sex- and site-specific correlates between OSAHS and cancer [25]. One of the few studies assessing the genetic components of OSAHS determined that cancer-related pathways were affected by CPAP therapy [26].
Our findings corroborate the latter study on both the gene and pathway level. Specifically, significantly enriched signatures that overlap between our studies included androgen signaling, breast cancer, and leukemia-related pathways (see [20] and Supplementary Materials File S4; FDR < 0.05). Furthermore, key regulators of neoplastic processes and transcriptional dysregulation included genes such as JUN, SMAD3, MYC, HDAC1 and BRCA1 that were also part of the regulatory networks associated with rs999944.
Within this context, our findings support a model of long-range regulation by non-genic SNPs [19], and potentially add to the evolving concept of transcriptional regulation by intergenic regions and the perturbations introduced by their polymorphisms [26].
Adipocytokines, Cancer and OSAHS
Adipocytokine signaling and cancer, as they arise in our analyses and are corroborated by others [19-21,26], should be considered cross-talking pathophysiological substrates, further enhanced by OSAHS. The relationship between OSAHS and adipocytokines may arise from a common genetic substrate, as the recently discovered interplay between OSAHS and hypertriglyceridaemia has shown [27].
Adipocytokine cascades are well-established clinical correlates of cancer [26], with obesity-related proteins in general being associated with an increased risk of breast cancer in females [28]. Adipocytokine signaling and hypoxia, combined in metabolically active adipose tissue, create favorable conditions for tumorigenesis, including sustained proinflammatory signaling and remodeling of the extracellular matrix [29].
Limitations and Strengths
Our findings should be interpreted within the conceptual framework of this study's limitations and strengths. A major limitation of our study is that the clinical cohort did not achieve a sufficient sample size, as reflected by the invariance of rs116133558 and rs75108997. As such, we were not able to detect associations between genotype frequency and OSAHS parameters at the clinical level. Considering that recruitment was greatly affected by the pandemic, this obstacle could not be overcome within the given timeframe of our study. Another important caveat regarding the clinical cohort is that, as a nested study, the reference population is specific and thus any findings we could have drawn would be of reduced generalizability. Furthermore, while we reconstructed a regulatory gene network from two intergenic SNPs, we could not confirm our findings in a prospective cohort. We mitigated this limitation by comparing our findings with two of the most comprehensive studies in OSAHS, and we managed to validate our major findings on both the genomic and pathway level.
Conclusions
This is the first study to uncover a regulatory network in OSAHS using transcriptional targets of intergenic SNPs and to map their contributions to the pathophysiology of the syndrome onto the interplay between adipocytokine signaling and cancer-related transcriptional dysregulation. Further studies are needed to expand this concept to other intergenic SNPs and to outline the non-genic networks governing long-range transcriptional regulation in OSAHS. Author Contributions: K.I.G., F.M., G.X., G.D.V. and C.P. were involved in the study conception and performed the design of the study. D.G.R. and G.D.V. performed the literature search, the data collection, and the statistical analysis, and prepared and wrote the manuscript. D.I.S. participated in data collection. K.I.G., G.X., F.M., G.D.V., C.P. and D.G.R. were involved in revising the manuscript for important intellectual content. All authors have read and agreed to the published version of the manuscript.
Funding:
The Authors received no specific funding for this work.
Institutional Review Board Statement: This study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Institutional Review Board of University Hospital of Larissa (No: 63569-24 December 2018) and written informed consent was obtained from all participants. Informed Consent Statement: All patients and controls provided written consent to participate in the study.
Data Availability Statement:
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to containing information that could compromise the privacy of research participants.
Conflicts of Interest:
The authors declare that they have no competing interests. | 3,516.8 | 2021-09-24T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Biology"
] |
Spoken language-mediated anticipatory eye-movements are modulated by reading ability-Evidence from Indian low and high literates
We investigated whether levels of reading ability attained through formal literacy are related to anticipatory language-mediated eye movements. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed in such a way that participants could use semantic, associative, and syntactic information from adjectives and particles (preceding the critical noun) to anticipate the visual target objects. High literates started to shift their eye gaze to the target objects well before target word onset. In the low literacy group this shift of eye gaze occurred only when the target noun (i.e. "door") was heard, more than a second later. Our findings suggest that formal literacy may be important for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as spoken language-mediated eye gaze. In the conclusion, we discuss three potential mechanisms of how reading acquisition and practice may contribute to the differences in predictive spoken language processing between low and high literates.
Introduction
Prediction appears to be a fundamental aspect of human cognition (James, 1890; Pezzulo, Hoffmann, & Falcone, 2007). People predict the outcome of others' actions as they unfold (Sebanz & Knoblich, 2009), ensemble musicians generate online predictions by simulating the concurrent productions of their co-musicians (Keller & Koch, 2008; Wolpert, Doya, & Kawato, 2003), and knowing a co-actor's task influences one's own planning and performance even in situations that do not require taking into account the other's task (Sebanz, Knoblich, & Prinz, 2003, 2005). The mere knowledge of another person's upcoming hand movements results in activation of one's own motor system even when no actual movement is seen (Kilner, Vargas, Duval, Blakemore, & Sirigu, 2004). Similarly, motor activation is observed when individuals use visual cues to prepare their own actions as well as when they use the same cues to predict others' actions (Ramnani & Miall, 2004). Even infants' motor development relies strongly on perception and knowledge of up-coming events (von Hofsten, 2004; see also Hunnius & Bekkering, 2010). Anticipatory eye movements have been reported in a great variety of tasks such as tea-making (Land, Mennie, & Rusted, 1999), sandwich-making (Hayhoe, Shrivastava, Mruczek, & Pelz, 2003), driving (Land & Lee, 1994), and piano-playing (Land & Furneaux, 1997), and appear to support subsequent visuo-motor coordination (Mennie, Hayhoe, & Sullivan, 2006).
In the domain of language processing, it has long been known that predictable words are read faster than unpredictable words (Ehrlich & Rayner, 1981; Rayner & Well, 1996; see Frisson, Rayner, & Pickering, 2005, for recent discussion). More recently, eye-tracking studies using spoken language have shown that participants can use semantic (Altmann & Kamide, 1999) and syntactic (Kamide, Scheepers, & Altmann, 2003) information to anticipate an upcoming visual referent. In one such study, for example, participants were presented with semi-realistic visual scenes depicting a boy, a cake, and some toys while concurrently hearing sentences such as "The boy will move the cake" or "The boy will eat the cake". Eye movements to the cake (the only edible object in the scene) started significantly earlier in the "eat" condition than in the "move" condition (and well before the acoustic onset of "cake"), which shows that participants used information retrieved from the verb to predict which object was going to be referred to next.
It appears thus that the importance of prediction for language processing has been well established, and consequently (and unsurprisingly) theoretical accounts of predictive language processing have become very influential (e.g., Altmann & Mirkovic, 2009; Chang, Dell, & Bock, 2006; Federmeier, 2007; Kukona, Fang, Aicher, Chen, & Magnuson, 2011; Pickering & Garrod, 2007). One noteworthy aspect of these data however is that almost all studies on predictive language processing have been conducted with undergraduate students (but see Borovsky, Elman & Fernald, in press; Nation, Marshall, & Altmann, 2003). It is, at least, an open empirical question whether the sophisticated language-mediated prediction abilities of Western undergraduate students generalize beyond these narrow samples (see Arnett, 2008; and Henrich, Heine, & Norenzayan, 2010, who argue that the Western student participants used in most experimental studies in psychology are the 'WEIRDest' - Western Educated Industrialized Rich Democratic - people in the world and the least representative populations one can find to draw general conclusions about human behavior).
Interestingly, there is increasing evidence from other domains that people's ability to predict and anticipate upcoming events is modulated by their level of expertise on the task at hand. Much of the evidence for this comes from sports psychology. Whether this kind of evidence is considered to be relevant for an investigation of predictive language processing depends perhaps on one's own general theory of cognition, but it is (at least) noteworthy that elite basketball players, for instance, predict the success of free shots at baskets earlier and more accurately than people with comparable visual experience (i.e., coaches and sports journalists; Aglioti, Cesari, Romani, & Urgesi, 2008). Similarly, expert volleyball players are superior to novice players in predicting the landing location of volleyball serves (Starkes, Edwards, Dissanayake, & Dunn, 1995). Skilled tennis players are faster than novices in anticipating the direction of opponents' tennis strokes (Williams, Knowles, & Smeeton, 2002), and karate athletes are better than spectators in predicting the target area of an opponent's attack (Mori, Ohtani, & Imanaka, 2002). Such high levels of ability appear to be due to the fine-tuning of specific anticipatory mechanisms that enable athletes to predict others' actions prior to their realization (Aglioti et al., 2008).
Here we sought to establish whether language-mediated prediction is modulated by formal literacy. We compared language-mediated anticipatory eye gaze in high literates (Indian university students with an average of 15 years of formal education) and low literates (Indian manual workers with an average of 2 years of formal education). Does the (in)ability to read and write impact on the tendency to predict which concurrent visual object a speaker is likely to refer to next? In other words, does literacy have effects which go beyond the prediction of words in written texts and increase the likelihood of predictive processing even during spoken language processing?
If prediction is central to language processing, as the empirical results with student participants and the theoretical accounts suggest, then it should be present in all proficient speakers/listeners regardless of their level of formal schooling. Even low literates experience speech every day, and this experience should, by adult age, bring their predictive ability to a ceiling level.
We studied Indian low literates, who are particularly suited for such an investigation. More than 35% of the Indian population is considered to be low literate or 'illiterate' (UNICEF, 2008). It is important to note here that Indian low literates are fully integrated within Indian society. Low literacy levels are mainly due to poverty and other socioeconomic factors rather than any cognitive impairments or difficulty with reading acquisition (see Huettig, Singh, & Mishra, 2011, for further discussion).
To make the task easy for both participant groups we chose a simple 'look and listen' task reminiscent of everyday contexts. Participants listened to simple spoken sentences while concurrently looking at a visual display of four objects on a computer screen. They were told that they should listen to the sentences carefully, that they could look at whatever they wanted to, but that they should not take their eyes off the screen throughout the experiment (Altmann & Kamide, 1999; Huettig & Altmann, 2005; see Huettig, Rommers, & Meyer, 2011, for further discussion of the method).
We chose a frequent Hindi construction which encouraged anticipatory eye gaze to up-coming target objects. These spoken sentences contained adjectives followed by the particle wala/wali and a noun (e.g., 'Abhi aap ek uncha wala darwaja dekhnge', literally: Right now you are going to a high door see - You will now see a tall door). The Hindi particle wala/wali is semantically neutral and not obligatory but frequently used for discourse purposes. Adjective (e.g., uncha/unchi, high) and particle (wala/wali) are gender-marked in Hindi and thus participants could use syntactic information to predict the target. In addition, to maximize the likelihood of observing anticipation effects, we chose adjectives which were also associatively related to the target object. We measured at what point in time in the duration of the spoken sentence low and high literates shifted their eye gaze towards the target objects.
Method
Participants
28 high literates (mean age = 24.6 years, SD = 2.3 years; mean of 15 years of formal education) and 30 low literates (mean age = 28.4 years, SD = 2.6; mean of 2 years of formal education) were paid for their participation. All were from the city of Allahabad in the Uttar Pradesh region of India and had Hindi as their mother tongue. All had normal vision, and none had known hearing problems. The study was approved by the ethics committee of Allahabad University and informed consent was obtained from all participants.
The assignment to participant groups was based on the mean number of years of formal education. High literates were postgraduate students of Allahabad University. Low literates were recruited on or around the university campus and asked whether they could read or write. All were engaged in public life and supported themselves by working, for instance, in food and cleaning services on or near the university campus. The low literacy group did not include any individuals involved in an adult literacy program. An average of 2 years of formal education in Uttar Pradesh (as in the low literacy group) tends to result in very rudimentary reading skills. To ensure appropriate participant selection, a word reading task was administered to participants. 96 words of varying syllabic complexity were presented. High literates on average read aloud 94.2 words correctly (SD = 1.9), whereas low literates only read aloud 6.3 words correctly (SD = 7.77). None of the participants appeared to be socially excluded, and none showed any signs of genetic or neurological disease.
Materials and stimulus preparation
There were 60 displays, each paired with a spoken sentence. 30 trials were experimental trials; the other 30 were filler trials. Each sentence contained a lead-in phrase ('Abhi aap ek', Right now you are going to), followed by an adjective (e.g., 'uncha', high), then the particle ('wala'/'wali') and a noun (e.g., 'darwaja', door).
A norming study was carried out to select adjectives that are strongly associated with particular (object) names. 15 literate Hindi native speakers participated; none of them took part in the main experiment. Participants saw a list of 30 adjectives and were asked to write down the first 5 nouns that came to mind. The picturable noun that was produced most frequently for a particular adjective was selected (e.g., for 'uncha', high, participants most frequently produced the noun 'darwaja', door). The selected adjective (e.g., 'uncha', high) was not associated with any of the other objects in the display. Similarly, the grammatical gender of the adjective agreed only with the target and not with the distractor objects in the same display. Sentences were recorded by a female native speaker of Hindi. Visual displays in the experimental trials (Figure 1) consisted of line drawings of the target object (e.g., door) and three unrelated distractors. All visual stimuli were frequent and common objects known to both participant groups.
Procedure
Participants were seated at a comfortable distance from a 17 inch monitor. A central fixation point appeared on the screen for 750 ms, followed by a blank screen for 500 ms. Then four pictures appeared on the screen. The positions of the pictures were randomized across four fixed positions of a (virtual) grid on every trial. The auditory presentation of a sentence was initiated 1000 ms later. Preview was provided so that participants had time to look at the objects. Participants were asked to perform a 'look and listen' task (Altmann & Kamide, 1999; see Huettig & McQueen, 2007, for discussion).
Data coding procedure
The data from each participant's right eye were analyzed and coded in terms of fixations, saccades, and blinks. The timing of the fixations was established relative to the onset of the adjective in the spoken utterance. Fixations were coded as directed to the target picture or to the unrelated distractor pictures.
Results
Figure 2 shows a time-course graph of the proportion of trials with a fixation on the target object or the averaged distractors. The curves are synchronized to the acoustic onset of the spoken adjective. The x-axis shows the time in milliseconds from this onset. The calculation excluded all movements prior to the acoustic onset, and therefore negative values reflect that (on average) participants moved their eyes away from objects fixated at this onset. Each data point reflects the proportion of trials with a fixation at that point in time minus the proportion of trials with fixations to that region at the acoustic onset of the adjective (see Huettig & Altmann, 2005). The average noun onset occurred 1560 ms after adjective onset.
Fig. 2 shows that high literates first shifted their eye gaze towards the target from around 800 ms after the onset of the adjective. The average duration of the adjectives was 778 ms (SD = 115). The graph thus suggests that high literates started to predict the up-coming target object well before the acoustic offset of the adjective. Fig. 2 also reveals that the participants in the low literacy group did not show a corresponding early shift in eye gaze towards the target (i.e. fixations to the target only started to diverge from the unrelated distractors around 300 ms after the onset of the noun). For the statistical analyses we computed mean fixation proportions for each type of object (target object and averaged distractor) per participant and item over a time interval starting from the acoustic onset of the adjective to 100 ms after this onset, in order to obtain a baseline of fixation proportions. We can assume that fixations during this baseline time region were not influenced by information from the critical spoken adjective because of the time considered necessary for programming and initiating an eye movement (Altmann, 2011; Saslow, 1967). We calculated fixation proportions during the baseline region to adjust for any bias in overt attention to a type of object before information from the critical adjective became available. Calculating fixation proportions for the baseline time regions (and then comparing these proportions with the mean fixation proportions during subsequent 100 ms time regions) allows us to test for any shifts in overt attention to particular types of objects during times of interest.
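The baseline-correction procedure just described can be sketched as follows; the data layout (a per-trial boolean fixation matrix sampled every millisecond, time-locked to adjective onset) and all names are assumptions for illustration, not the study's actual analysis code.

```python
# A minimal sketch, under assumed data structures, of the baseline-corrected
# fixation-proportion measure described above: per-100-ms-bin proportions of
# trials with a fixation on an object type, minus the 0-100 ms baseline bin.
import numpy as np

def fixation_proportions(fix_matrix: np.ndarray, bin_ms: int = 100) -> np.ndarray:
    """fix_matrix: trials x milliseconds boolean array, True where the
    object of interest (target or averaged distractor) is fixated; time 0
    is the acoustic onset of the adjective. Returns baseline-corrected
    per-bin fixation proportions."""
    n_trials, n_ms = fix_matrix.shape
    n_bins = n_ms // bin_ms
    # A trial counts as fixating in a bin if any sample within the bin hits.
    binned = fix_matrix[:, :n_bins * bin_ms].reshape(n_trials, n_bins, bin_ms)
    props = binned.any(axis=2).mean(axis=0)
    return props - props[0]  # subtract the 0-100 ms baseline bin

# Example with fabricated data: 30 trials, 2000 ms of samples.
rng = np.random.default_rng(0)
demo = rng.random((30, 2000)) < 0.2
print(fixation_proportions(demo)[:5])
```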
Discussion
This study was conducted to compare language-mediated anticipatory eye gaze to visual objects in low and high literates. On hearing the biasing adjective, and well before the acoustic onset of the spoken target word, high literates started to look more at the target object than at unrelated distractors. Low literates' fixations on the targets only started to differ from looks to the unrelated distractors once the spoken target word acoustically unfolded. Thus high literates shifted their eye gaze towards the target objects about 1000 ms before the low literates. We used gender-marked adjectives which are highly associated with the target nouns. High literates' predictions thus relied on either associative (cf. Bar, 2007) or syntactic information. Our data do not conclusively show which type of information high literates used to anticipate the visual object. What our results do show, however, is that low literates did not consistently use any of the available cues to anticipate the upcoming referent.
One might argue that the anticipation effect in low literates was absent due to noise, or that they understood the sentences in exactly the same way as our highly literate participants but somehow were less willing or able to shift their eyes to the targets. Our data show that such an account is very unlikely to be correct. The shifts in eye gaze to the target objects by the low literates were closely time-locked to the onset of the noun rather than being randomly distributed across all objects. Figure 2 shows that (taking into account the delay needed to initiate an eye movement) low literates shifted their eyes towards the target soon after the earliest point in time at which a fixation could reflect a response based on information in the noun. This demonstrates that low literates (like high literates) used information from unfolding spoken words to direct their eye gaze; they just did not use such information for prediction.
A further argument might be that low literates did not process the adjective and particle in the same way as the highly literate participants. For instance, they may simply have been unable to use the syntactic information of the adjective and the particle for prediction because they did not know that adjective, particle, and noun agree in gender. We can also reject this account: our participants did not make any gender errors in their spoken language. It is important to note that our spoken materials were by no means difficult or unusual but simple declarative sentences used in everyday situations by high and low literates alike.
Another argument may be that the highly literate participants guessed the purpose of the experiment (i.e. what word will come next in the sentence) and tried to behave accordingly, whereas the low literates did not. If this explanation of the data were correct, then our results may not reflect the ability to anticipate sentence continuation, but instead the ability to guess the purpose of the experiment. We believe this account to be unlikely since our lead-in phrase ('Abhi aap ek', 'Right now you are going to') was designed to set up an expectation that an object would be referred to. However, further research could usefully explore the extent to which anticipatory processing may be driven by task demands (e.g., in visual world experiments by the limited visual context or the instructions, and in ERP experiments using written words by the artificially slow timing of the sentences).
The present group differences are also unlikely to be due to differences in familiarity with 2D representations of real objects. All our objects were line drawings of frequent and common objects familiar to both low and high literates. In a recent study we observed very high naming agreement for similar line drawings in the low literacy group (see Huettig et al., 2011, for further discussion). There is one study (Reis et al., 2001) which has reported a slight difference (approx. 200 ms) in the naming latencies of line drawings between Portuguese illiterates and literates. Our participants however were given a preview of the visual display, and thus a small delay in picture naming latencies could not account for the more than 1000 ms delay in shifts in eye gaze to the target objects. Indeed (as mentioned above) our data show that when low literates heard the names of the target objects they quickly shifted their eye gaze to them, which suggests that they had recognized the objects. Thus, we can reject these alternative explanations of our data.
Note that we do not suggest that illiterates and low literates never predict during cognitive processing, nor do we claim that they never engage in any form of predictive processing during language processing. When listening to other sentence constructions, illiterates/low literates may well be found to engage in some anticipatory processing (though our results do suggest that such context would have to be highly predictive). What we have found is that low literates do not engage in anticipatory eye gaze in Hindi adjective-particle-noun constructions. Our data thus suggest that literacy modulates predictive language processing.
How might formal literacy and language-mediated prediction be related? It has long been known that readers predict up-coming words during reading. As mentioned above, much research has demonstrated that predictable words are read faster than unpredictable words (e.g., Ehrlich & Rayner, 1981; Rayner & Well, 1996). We propose that the acquisition and practice of reading increases the likelihood of predictive processing even during spoken language processing. That is, we suggest that literacy has some causal influences that go beyond the prediction of words in written texts. We conjecture that learning to read and write fine-tunes anticipatory mechanisms that involve the retrieval of associated words and the pre-activation of fine-grained (e.g., semantic and syntactic) representations of upcoming words. What could these anticipatory mechanisms be?
One possibility is that the group differences in predictive processing are related to literacy-related differences in adjective-noun associations (cf. Bar, 2007). A related possibility is that illiterates/low literates predict less during language processing because the absence of reading and writing practice greatly decreases their exposure to low-level word-to-word contingency statistics. McDonald and Shillcock (2003a, b), in this regard, provided some evidence that readers make use of statistical knowledge in the form of transitional probabilities, i.e. the likelihood of two words occurring together. Moreover, Conway, Bauernschmidt, Huang, and Pisoni (2010) recently demonstrated that performance in implicit learning tasks correlated significantly with the ability to predict the last word of sentences in a written sentence-completion task. Rayner, Warren, Juhasz, and Liversedge (2004), on the other hand, have argued that transitional probability effects are unlikely to survive intervening words (cf. Carroll & Slowiaczek, 1986; Morris, 1994). Moreover, Frisson et al. (2005) have questioned whether effects of low-level transitional probabilities are independent from 'regular' (i.e. higher-level) predictability effects (which are typically determined by the use of a Cloze task in which participants are asked to complete sentences or sentence fragments, and predictability is determined by calculating the percentage of times a particular word was given in a particular sentence). Frisson et al. (2005) replicated the findings of McDonald and Shillcock (2003) in a first experiment but, in their second experiment, when items were matched for Cloze values, no effect of transitional probabilities was found. Thus, a second possibility is that illiterates/low literates predict less during language processing simply because they have acquired less contextual knowledge than high literates. Schwanenflugel and colleagues (e.g., Schwanenflugel & Shoben, 1985), for example, have argued that in highly predictive contexts more featural restrictions of up-coming words are generated in advance of the input than in low predictive contexts. These featural (e.g., semantic, syntactic) restrictions may then constrain what words are likely to come up. Individuals with no or low literacy levels, because of the absence of reading, may have had fewer opportunities to increase their general contextual knowledge and may consequently generate fewer featural restrictions of upcoming words, which in turn may result in less online anticipation.
A final possibility we would like to raise here is that reading and predictive language processing may be related to general processing speed. Reading and spoken language comprehension, for instance, differ in the amount of information that is processed per time unit (approx. 250 vs. 150 words/minute). To maintain a high reading speed, prediction, arguably, is helpful if not necessary. Furthermore, it is conceivable that the steady practice of reading enhances readers' general processing speed. Salthouse (1996), for instance, has pointed out that "performance in many cognitive tasks is limited by general processing constraints, in addition to restrictions of knowledge (declarative, procedural, and strategic), and variations in efficiency or effectiveness of specific processes ... it is assumed that general limitations frequently impose constraints on many types of processing and, hence, that they have consequences for the performance of a large variety of cognitive tasks" (pp. 403-404). Stoodley and Stein (2006), for instance, found that dyslexics and poor readers showed a general motor slowing related to a general deficit in processing speed. Of course, these data do not tell us whether the reading problem and the slow processing speed in dyslexics are causally related. It is interesting in this regard, however, that low literates' shifts in eye gaze to the target objects, on hearing the acoustic information of the target word, also occurred approximately 200 ms later than for the high literates. In other words, even the 'non-anticipatory' shifts in eye gaze when the target objects were named were slightly delayed in the low literacy group.
It is important to note that these potential causal factors underlying the differences in predictive language processing between low and high literates (i.e. low-level word-to-word contingency statistics, online generation of featural restrictions, general processing speed) are not mutually exclusive. In fact, they are likely to interact (e.g., a faster general processing speed may result in a greater number of featural restrictions generated online), and of course there may be other factors, yet to be explored, which make proficient readers more likely to predict up-coming words.
Finally, we point out that our data cannot tease apart independent effects of formal schooling and learning to read and write. It is notoriously difficult to separate effects of literacy from more general effects of formal schooling since all forms of reading instruction inevitably involve (at least some) aspects associated with formal education. We believe that it is useful to draw a distinction between proximate and distal causes of the observed behaviour. Proximate causes are those which immediately lead to an observed behaviour; distal causes are those which are more remote. We suggest that formal schooling is more likely to be a distal cause of the differences in language-mediated prediction between our participant groups, whereas literacy is more likely to be a proximate cause. Other distal influences may include parental education, childhood nutrition, and access to medical care. More research could usefully be directed at exploring how these factors influence literacy acquisition.
Conclusions
We observed that high but not low literates showed anticipatory eye movements to concurrent target objects in Hindi adjective-particle-noun constructions. Our data are consistent with the notion that the steady practice of reading and writing enhances individuals' abilities to generate lexical predictions, abilities that help literates to exploit contextually relevant predictive information when anticipating which object an interlocutor will refer to next in the visual environment. Our findings highlight the need to investigate a) the degree, and b) the potential mechanisms, of anticipatory language processing in non-student populations.
Ramesh K. Mishra, Niharika Singh, Aparna Pandey, Centre of Behavioral and Cognitive Sciences, University of Allahabad, India; Falk Huettig, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands, and Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
Figure 1. Example visual display depicting the target object (door) and three unrelated distractors.
Figure 2. Changes in fixation proportions on the target objects and (averaged) unrelated distractor objects for low literates and high literates. Zero on the timeline is the acoustic onset of the adjective. | 6,314 | 2012-03-15T00:00:00.000 | [
"Psychology",
"Linguistics"
] |
Privacy-aware multi-institutional time-to-event studies
Clinical time-to-event studies depend on large sample sizes, often not available at a single institution. At the same time, particularly in the medical field, individual institutions are often legally unable to share their data, as medical data is subject to strong privacy protection owing to its particular sensitivity; its collection, and especially its aggregation into centralized datasets, is fraught with substantial legal risks and often outright unlawful. Existing solutions using federated learning have already demonstrated considerable potential as an alternative to central data collection. Unfortunately, current approaches are incomplete or not easily applicable in clinical studies owing to the complexity of federated infrastructures. This work presents privacy-aware and federated implementations of the most commonly used time-to-event algorithms in clinical trials (survival curve, cumulative hazard rate, log-rank test, and Cox proportional hazards model), based on a hybrid approach of federated learning, additive secret sharing, and differential privacy. On several benchmark datasets, we show that all algorithms produce highly similar, or in some cases even identical, results compared to traditional centralized time-to-event algorithms. Furthermore, we were able to reproduce the results of a previous clinical time-to-event study in various federated scenarios. All algorithms are accessible through the intuitive web app Partea (https://partea.zbh.uni-hamburg.de), which offers a graphical user interface for clinicians and non-computational researchers without programming knowledge. Partea removes the high infrastructural hurdles of existing federated learning approaches and the complexity of their execution. It is therefore an easy-to-use alternative to central data collection that reduces both bureaucratic effort and the legal risks associated with processing personal data to a minimum.
Introduction
Time-to-event analysis is a standard tool in clinical trials to model censored data [1]. In these data, the event of interest (e.g. death or relapse) is not necessarily observed until the end of the study, making the usual statistical methods inapplicable [2]. Time-to-event analysis is often applied in clinical trials that are designed to identify significant survival-related biomarkers or compare the efficacy of drugs [3][4][5]. As with many statistical analyses, large sample sizes are needed to produce reliable results and reduce bias. These large sample sizes are usually not available at a single institution. Therefore, different research institutions frequently participate in joint studies using a central data collection strategy. Owing to strict privacy regulations, such as the European General Data Protection Regulation (GDPR), collecting data centrally from different institutions is challenging, imposes substantial bureaucratic burdens, and might even be illegal in some cases [6,7]. Common approaches in clinical data sharing, such as de-identification or anonymization, come with a trade-off between data privacy and data quality [8]. If de-identification is not sufficiently strong, re-identification attacks can still reveal sensitive patient information [9,10]. Successful re-identification of shared anonymized data would harm data subjects in their fundamental right to privacy, thereby exposing the associated researchers to severe legal penalties. This is but one example of how crucial privacy-aware analysis of sensitive biomedical data is for clinical studies.
Federated learning (FL) was developed to overcome these obstacles by enabling data analysis on geographically distributed data while keeping the sensitive data private [11,12]. FL allows the training of statistical models without sharing the raw data that contains private information about patients. Only summary statistics or model parameters, so-called local models, are shared with a trusted central aggregator [13]. These local models also fall under GDPR rules if they are generated from personal data. Still, FL systems can add technical security measures that make aggregation possible in a way that would not be permissible with the raw data itself. One fundamental measure is encryption, preventing the aggregation server from mounting reconstruction attacks. Moreover, a combination of FL and privacy-enhancing technologies (PETs), such as additive secret sharing or differential privacy (DP), is needed to increase the privacy and security of the whole analysis, reduce the need for trust in the aggregation server, and ensure compliance with data protection laws [14][15][16]. Such a combination of FL and PETs is often called a hybrid approach. FL or hybrid implementations of various algorithms have already been shown to deliver accurate results in different biomedical applications, such as genome-wide association studies [17,18], differential gene expression analysis [19], the analysis of electronic health records [20], and the prediction of patient outcomes with COVID-19 [21].
For time-to-event analysis, the first privacy-preserving and federated approaches were developed in recent years. A concept for distributed time-to-event regression was published by Lu et al. in 2015 [22]: WebDISCO, a web platform for distributed Cox proportional hazards models without patient-level data sharing. Another approach, calculating federated survival functions using multi-party homomorphic encryption, was published by Froelicher et al. in 2021, being the first hybrid approach with an enhanced focus on privacy [23]. The current approaches already show the high potential of FL for time-to-event analysis; however, none offers a fully comprehensive solution. WebDISCO is no longer maintained and does not consider PETs, requiring high trust in the aggregating server and making it a potential point of cyber-attack [24]. Also, it only supports the Cox proportional hazards model and no other time-to-event algorithms. Froelicher et al. strongly focused on the privacy of the raw data and the exchanged model parameters. However, while their approach offers a strong level of privacy, the resulting survival curves can still leak information about the included patients without much effort [25]. Also, they solely focused on one type of algorithm, the Kaplan-Meier estimator. Another disadvantage is that their tool is unavailable to the general public, and their implementation is not open-source. A comprehensive toolset of widely used time-to-event algorithms is needed that is straightforward to understand and intuitive to set up and use. Ideally, it should reduce technical hurdles to a minimum, achieving results similar to the centralized approaches while preserving the patients' privacy and being GDPR compliant. Furthermore, when it comes to privacy-aware methods, open-source solutions have tremendous advantages: revealing the source code increases trust in the software. Open-source software also enables future maintenance, security updates, community-driven development, and code usage in other projects. From a data privacy perspective, the open-source approach has the potential to maintain privacy through faster discovery and remedy of vulnerabilities. At the same time, it poses the risk of hackers exploiting their access to the code. However, from a technical point of view, this risk is not necessarily higher than in closed-source software [26].
To address the existing problems, we propose easily applicable, privacy-aware, federated implementations of the most widely used algorithms in clinical time-to-event studies: survival function, cumulative hazard function, log-rank test, and Cox proportional hazards model. Our implementations are based on a hybrid approach of FL and additive secret sharing to increase the privacy of FL by hiding the shared local statistics and model parameters from the global aggregator [27]. We extended the federated survival function, cumulative hazard function, and log-rank test with a previously published approach to render the resulting outputs differentially private and reduce the privacy leakage of published data [28]. Moreover, we extended the federated Cox proportional hazards model to support L1 and L2 penalization, which was not supported in WebDISCO. We demonstrate that our approach performs as well as centralized approaches. Additionally, we reproduced a multi-institutional clinical study that used centralized data collection with very high similarity. All methods are accessible through the open-source platform Partea (https://partea.zbh.uni-hamburg.de), enabling complete transparency about the implementations and allowing for further maintenance and extendibility by the community. The platform provides an entire federated infrastructure and makes privacy-aware multi-institutional time-to-event analysis accessible and ready for clinicians, statisticians, and bioinformaticians without deeper technical knowledge. It also incorporates PETs that represent the state of the art in data privacy and ensure sufficient data protection to enable GDPR compliance even in large, complex collaborations. The entire source code is available on GitHub (https://github.com/federated-partea).
Implementation
In this work, we implemented a hybrid approach combining FL and additive secret sharing to enable privacy-aware multi-institutional time-to-event analysis without central data collection. The FL architecture consists of local clients handling sensitive data analysis at each participating site and a global aggregation server that receives the local parameters from each site and incorporates them into a common, global model. At the beginning of each project, the public keys of each site are exchanged with all other sites. After that, our workflow consists of five major steps, as illustrated in Fig 1. (1) Each site splits the parameters it wants to exchange into one secret per participating site; summed together, these secrets reveal the actual parameters again. Each secret is encrypted with the public key of a particular site and can therefore only be decrypted by that site. (2) The server collects all secrets and distributes them to the corresponding sites. (3) Each site decrypts the received secrets using its private key and sums them up. (4) The summed-up parameters still do not reveal any information and are sent to the aggregation server. (5) Finally, the server sums up the received sums of all sites to obtain the actual global aggregate and broadcasts it to the local sites. For algorithms with an iterative approach, such as the Cox proportional hazards model, the whole process from step (1) to (5) is repeated until convergence or a stopping criterion has been reached.
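The following minimal sketch (our illustration, not Partea's actual code) walks through steps (1)-(5) for a single scalar parameter per site; the transport encryption of the secrets is omitted for brevity, and all site names and values are hypothetical:

```python
# A minimal sketch of one additive secret sharing round across three sites.
import random

PRIME = 2**61 - 1  # shares live in a finite field so magnitudes are hidden

def make_shares(value: int, n_sites: int) -> list[int]:
    """Split `value` into n additive shares that sum to `value` mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_sites - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Step 1: each site splits its local parameter into one share per site.
local_params = {"site_a": 120, "site_b": 85, "site_c": 42}  # hypothetical counts
shares = {site: make_shares(v, len(local_params)) for site, v in local_params.items()}

# Steps 2-3: the server routes share i of every site to site i, which sums them.
partial_sums = [sum(shares[site][i] for site in shares) % PRIME
                for i in range(len(local_params))]

# Steps 4-5: sites send their partial sums; the server recovers only the total.
global_aggregate = sum(partial_sums) % PRIME
assert global_aggregate == sum(local_params.values())
```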
The main advantage of this hybrid combination of FL and additive secret sharing is that participants and the aggregating server can only see the global aggregate of the calculation. They are not able to identify or reconstruct any of the exchanged parameters, while the results remain almost identical. With this approach, we implemented privacy-aware and federated methods for the most commonly used time-to-event algorithms in clinical trials: the Kaplan-Meier estimator for estimating the survival function [29], the Nelson-Aalen estimator for estimating the cumulative hazard function [30], the log-rank test for the comparison of two individual cohorts [31], and the Cox proportional hazards model for time-to-event regression [32,33].
Previous work has shown that it is possible to reconstruct the time of the event and event status directly from the survival function [34,35]. This potential leak in privacy also occurs in survival functions computed on centrally collected datasets. DP can be used to address this potential limitation. In DP, random noise is added to a model to hide the characteristics of individual data points. The noise level is chosen to prevent re-identification but not change the global properties of the dataset [36,37]. Therefore, we integrated the functionality of differentially private survival functions, cumulative hazard function, and log-rank test as proposed by Gondara et al. in 2020 into our approach. The authors added random Laplacian noise to the number of events, subjects at risk, and censored individuals for each time point [28].
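As a rough illustration of this scheme (our sketch based on the summary above, not the reference implementation; the rounding and clipping of the noisy counts are our assumptions), Laplacian noise with sensitivity 1 and scale 1/epsilon is added to the per-time-point counts before the survival function is computed:

```python
# A minimal sketch of differentially private count perturbation for the
# survival function; all count vectors are hypothetical examples.
import numpy as np

def dp_counts(events, at_risk, censored, epsilon: float, rng=np.random.default_rng()):
    """Add Laplace(sensitivity/epsilon) noise to each count vector."""
    scale = 1.0 / epsilon  # sensitivity of the counts is 1
    return [np.maximum(np.rint(c + rng.laplace(0.0, scale, len(c))), 0)
            for c in (events, at_risk, censored)]

events = np.array([3, 1, 2, 4])       # hypothetical d_i per time point
at_risk = np.array([50, 46, 40, 30])  # hypothetical n_i per time point
censored = np.array([1, 5, 8, 0])
d, n, c = dp_counts(events, at_risk, censored, epsilon=1.0)
dp_survival = np.cumprod(1.0 - d / np.maximum(n, 1))  # Kaplan-Meier on noisy counts
```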
All algorithms are accessible through the "Partea" (Privacy-AwaRe Time-to-Event Analysis) platform, making them easily applicable in clinical trials. Partea consists of three main parts: (1) a global web frontend (Angular) to create federated projects, invite participants, and visualize the results; (2) a local client application running on all major operating systems (Ubuntu, macOS, Windows) for local computations on sensitive data; and (3) a server for handling the data communication (Django). Through its intuitive user interface, Partea is not only applicable by statisticians or (bio)informaticians but can also be used by clinicians or biologists without programming knowledge. After creating a new study and adjusting several initial settings, the study coordinator can invite other participants by sharing unique invitation tokens. With these tokens, an invited participant can join the project through the local Partea client, choose its local dataset, and follow the progress of the federated study through the web app. After every participant has joined and the clients are running, the study coordinator can start the federated analysis through the web app. After the run, all results are available through the web app and can be explored interactively or downloaded.
Federated time-to-event analysis algorithms
2.2.1 Survival function, cumulative hazard function, and log-rank test. The survival function $S(t)$ and cumulative hazard function $H(t)$ are defined as $S(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$ and $H(t) = \sum_{t_i \le t} \frac{d_i}{n_i}$, where $d_i$ is the number of events and $n_i$ the number of individuals at risk at time point $t_i$. In our federated approach, each participating site $k$ calculates the number of events $d_{ik}$ and the number of individuals at risk $n_{ik}$ locally for each time point $t_i$ and shares the resulting matrix $m_k$ with the global aggregator. The aggregator sums up $d_i = \sum_{k=1}^{K} d_{ik}$ and $n_i = \sum_{k=1}^{K} n_{ik}$ over all $K$ sites, leading to the federated survival function $S_{fed}(t) = \prod_{t_i \le t} \left(1 - \frac{\sum_k d_{ik}}{\sum_k n_{ik}}\right)$ and the federated cumulative hazard function $H_{fed}(t) = \sum_{t_i \le t} \frac{\sum_k d_{ik}}{\sum_k n_{ik}}$. Only counts of the observed events and of the individuals at risk are exchanged with the server to calculate the global survival function $S_{fed}(t)$ and cumulative hazard function $H_{fed}(t)$ at the global aggregator. Using the additive secret sharing scheme for data exchange, the aggregating server can only see the aggregated, global matrix $m$ instead of the local matrices $m_k$, leading to a level of privacy similar to the centralized approach.
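A minimal sketch of this aggregation (our illustration with hypothetical counts, not Partea's code; the secret sharing layer is omitted, so the global matrix is formed directly):

```python
# Sites report per-time-point (d_ik, n_ik) matrices over a shared timeline;
# the aggregator sums them and computes S_fed and H_fed.
import numpy as np

timeline = np.array([1, 2, 3, 5, 8])  # hypothetical common, predefined timeline

# m_k rows: (events d_ik, at risk n_ik) per time point, one matrix per site.
site_matrices = [
    np.array([[2, 30], [1, 27], [0, 25], [3, 24], [1, 20]]),
    np.array([[1, 22], [2, 21], [1, 18], [0, 17], [2, 16]]),
]

m = sum(site_matrices)           # aggregator only ever sees this global matrix
d, n = m[:, 0], m[:, 1]
s_fed = np.cumprod(1.0 - d / n)  # federated Kaplan-Meier estimate
h_fed = np.cumsum(d / n)         # federated Nelson-Aalen estimate
```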
We further extended the approach to allow for the comparison of different cohorts or study groups using the log-rank test. For this comparison, each site needs an additional column in its input data indicating the corresponding group or cohort. For each group $c$, a separate matrix $m_{ck}$ is calculated locally and aggregated to a global matrix $m$ by the global aggregator. Based on this strategy, a pairwise federated log-rank test statistic can be calculated centrally at the aggregator using the expected ($E$) and observed ($O$) event counts of each group pair $A$ and $B$: $\chi^2_{fed} = \frac{(O_A - E_A)^2}{E_A} + \frac{(O_B - E_B)^2}{E_B}$.
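The sketch below (ours, using the standard chi-square form written above, since the paper's typeset expression was lost in extraction) computes the pairwise statistic from the two global group matrices:

```python
# Pairwise log-rank statistic from global (events, at_risk) matrices.
import numpy as np
from scipy import stats

def logrank_fed(m_a, m_b):
    """m_a, m_b: global per-time-point (events, at_risk) matrices for A and B."""
    d_a, n_a = m_a[:, 0], m_a[:, 1]
    d_b, n_b = m_b[:, 0], m_b[:, 1]
    d, n = d_a + d_b, n_a + n_b
    e_a = (n_a / n * d).sum()        # expected events in group A under the null
    e_b = (n_b / n * d).sum()
    o_a, o_b = d_a.sum(), d_b.sum()  # observed events per group
    chi2 = (o_a - e_a) ** 2 / e_a + (o_b - e_b) ** 2 / e_b
    return chi2, stats.chi2.sf(chi2, df=1)
```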
Federated Cox proportional hazards model.
Further, we reimplemented the WebDISCO [22] approach for the Cox proportional hazards model to enable federated time-to-event regression and extended it with the additive secret sharing scheme. Our implementation is based on lifelines, an open-source, state-of-the-art Python package for time-to-event analysis [38].
As WebDISCO did not address any normalization, we extended the approach with a federated z-score normalization. For this purpose, two exchanges with the server are needed. The local mean $\mu_k$ and the local number of samples $n_k$ for each site $k$ and covariate are calculated and shared with the global aggregator to calculate the global mean $\mu$, which is shared with the local sites. Thereafter, each site uses the global mean $\mu$ to compute its local sum of squared deviations $\sum_i (x_i^k - \mu)^2$ and shares it with the global aggregator to calculate the global standard deviation. This result is broadcast to the local sites again and used to normalize their local data, resulting in the formula for federated z-score normalization: $z_i^k = \frac{x_i^k - \mu}{\sigma}$ with $\sigma = \sqrt{\frac{1}{n}\sum_{k=1}^{K}\sum_i (x_i^k - \mu)^2}$ and $n = \sum_k n_k$. We perform the initialization similarly to the WebDISCO approach. After normalization, each site initializes the model statistics based on its local data. These statistics remain the same for the entire training process and are aggregated to the initialized global statistics on the aggregation server.
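A minimal sketch of the two-round normalization just described (our illustration with hypothetical per-site data for one covariate, not Partea's code):

```python
# Two-round federated z-score normalization across K sites.
import numpy as np

sites = [np.array([1.0, 3.0, 5.0]),
         np.array([2.0, 2.0]),
         np.array([4.0, 6.0, 8.0, 0.0])]  # hypothetical local covariate values

# Round 1: sites share (sum, n); aggregator computes and broadcasts the mean.
n = sum(len(x) for x in sites)
mu = sum(x.sum() for x in sites) / n

# Round 2: sites share squared deviations from the global mean; the aggregator
# computes and broadcasts the global standard deviation.
sigma = np.sqrt(sum(((x - mu) ** 2).sum() for x in sites) / n)

normalized = [(x - mu) / sigma for x in sites]  # done locally at each site
```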
Furthermore, the beta vector containing the coefficient values is initialized with zeros. In our hybrid approach, instead of sharing the distinct event times of each site k, we share a common, predefined timeline. This hides the actual distinct event times of each site from the global aggregator and assures higher privacy.
Iteratively, until convergence, the global beta vector is broadcast to the clients, and the local risk-set statistics $\sum_{r \in R_i} \exp(\beta^\top x_r)$, $\sum_{r \in R_i} \exp(\beta^\top x_r)\,x_r$, and $\sum_{r \in R_i} \exp(\beta^\top x_r)\,x_r x_r^\top$ are calculated and shared again with the global aggregator, with $R_i$ being the index set of individuals who are at risk for failure at time $i$. The global aggregator then calculates the first- and second-order derivatives of the likelihood function, updates the beta vector according to the Newton-Raphson method, and, if convergence is not achieved, a new iteration starts. We further extended the WebDISCO approach to make use of lifelines' penalized regression functionality and allow the use of both L1 and L2 penalties by specifying the l1-ratio ($\alpha$) and penalty ($p$), i.e. adding the elastic-net term $p\left(\alpha \lVert \beta \rVert_1 + \frac{1-\alpha}{2} \lVert \beta \rVert_2^2\right)$ to the objective. After convergence, the final coefficients of the model are known and can be used to prepare the final plots and statistics, such as p-values and hazard ratios for each covariate.
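The aggregator-side update can be sketched as follows (our illustration only: the per-site statistics are replaced by a self-contained toy surrogate `aggregate_from_sites`, and the L1 term is smoothed with a sign function, which is cruder than what lifelines actually does):

```python
# Aggregator-side Newton-Raphson loop for the federated Cox model.
import numpy as np

A = np.diag([4.0, 2.0, 1.0])            # toy information matrix standing in for
beta_star = np.array([0.5, -1.0, 2.0])  # the aggregated Cox second-order stats

def aggregate_from_sites(beta):
    """Hypothetical FL round: returns the summed gradient and information
    matrix (negative Hessian); here a toy quadratic surrogate so the sketch
    is self-contained and runnable."""
    return A @ (beta_star - beta), A

def newton_update(beta, gradient, information, penalty=0.0, l1_ratio=0.0):
    # Elastic-net contribution to gradient and Hessian (illustrative only).
    grad_pen = penalty * (l1_ratio * np.sign(beta) + (1 - l1_ratio) * beta)
    hess_pen = penalty * (1 - l1_ratio) * np.eye(len(beta))
    step = np.linalg.solve(information + hess_pen, gradient - grad_pen)
    return beta + step

beta = np.zeros(3)                      # coefficients initialized with zeros
for _ in range(25):                     # until convergence or stopping criterion
    gradient, information = aggregate_from_sites(beta)
    new_beta = newton_update(beta, gradient, information, penalty=0.1, l1_ratio=0.5)
    if np.max(np.abs(new_beta - beta)) < 1e-7:
        break
    beta = new_beta
```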
Benchmark evaluation
To evaluate the performance of our approach, we ran analyses on four benchmark datasets that are commonly used in time-to-event analysis: US Veterans' Administration lung cancer study data [39] (Veteran, 137 samples), NCCTG lung cancer data [40] (Lung, 168 samples), criminal recidivism data [41] (Rossi, 432 samples), and chemotherapy for Stage B/C colon cancer trial data [42] (Colon, 888 samples). More details about the datasets can be found in the supporting information, S1 Text. Each dataset was split randomly and equally into 3, 5, and 10 parts to simulate various federated scenarios with different numbers of sites and sample sizes. For this, we simulated a federated environment using Docker. Each site's local client was executed in a separate Docker container to realistically simulate network communication and the different environments of the local datasets.
Survival function.
We calculated the survival function for each federated scenario using the federated approach (FL) and the hybrid approach of FL and additive secret sharing (sFL). We compared it to the central survival function estimated using lifelines, a state-of-the-art Python package for time-to-event analysis [38]. Both the federated and hybrid approaches resulted in survival functions identical to the central analysis (lifelines) for all evaluated datasets and scenarios of varying numbers of participants. The resulting survival curves are shown in Fig 2. Owing to the same underlying statistics, this also proves that our FL and sFL approaches to the Nelson-Aalen estimator and the log-rank test provide identical results compared to the central analysis.
Differentially private survival functions.
We also included the functionality for differentially private survival functions and evaluated the approach by comparing DP survival functions to the actual non-DP survival functions. The main goal of this evaluation was to suggest suitable values of the privacy loss metric epsilon for the DP computation in future time-to-event analyses. Note that this evaluation is independent of the federated computation, as both provide identical results. Gondara et al. (2020) show that the sensitivity of the survival function estimation is 1. With this sensitivity and a variable privacy loss metric epsilon, the amount of noise is calculated and added to the survival function to guarantee a certain amount of privacy.
The smaller the privacy loss metric epsilon, the more privacy is assured by the algorithm. For each dataset, we ran 1000 simulations with different epsilons (3, 2, 1, and 0.75). We then compared each differentially private function to the original non-DP function by applying a log-rank test of whether the two functions differ significantly. A significant log-rank test means that the DP function is not similar to the actual function, making it inaccurate for clinical studies. The results of the log-rank tests are given in the supporting information, S1 Table. As shown in Fig 3, the smaller epsilon is, the more the resulting survival function differs from the original non-DP survival function. This finding, together with the fact that smaller sample sizes are more affected by noise, matches the observations in the original publication by Gondara et al. and is a common pattern when applying DP.
Using epsilons of 3 and 2 resulted in 100% non-significant differences between the differentially private and non-differentially private survival functions under the log-rank test for all datasets. Only for the two datasets with smaller sample sizes (Lung and Veteran) did epsilons of 1 and 0.75 lead to significantly different survival functions, and then only in very few cases (the worst being Veteran with an epsilon of 0.75, resulting in 2.4% significantly different functions). This observation indicates that, only in some rare cases, an epsilon of 1 or smaller can lead to too much noise if the sample sizes are small. Again, our results of the DP survival function evaluation are transferable to the Nelson-Aalen estimator and log-rank test, as they are all calculated from the same underlying statistics.
Based on this analysis, we suggest three predefined epsilons to reduce complexity and understandability for users: "high DP" with an epsilon of 0.75, which can be applied if more than 400 samples are available; "medium DP" with an epsilon of 1, which can be applied with more than 200 samples; for smaller sample sizes, "low DP" for which an epsilon of 3 should be used.
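These presets map directly to a simple rule; a sketch (ours, mirroring the thresholds stated above; the function itself is illustrative, not Partea's API):

```python
# Choose a suggested DP preset from the available sample size.
def suggested_epsilon(n_samples: int) -> tuple[str, float]:
    if n_samples > 400:
        return "high DP", 0.75
    if n_samples > 200:
        return "medium DP", 1.0
    return "low DP", 3.0

print(suggested_epsilon(888))  # ('high DP', 0.75) for the Colon dataset
```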
Cox proportional hazards model.
Similarly to the evaluation of the survival function, we simulated a federated scenario using the Cox proportional hazards model to compare the resulting logarithmized hazard ratio (HR) and its 95% confidence interval (CI). For all datasets and the various numbers of participants, our federated-only approach and the hybrid approach resulted in almost identical hazard ratios and corresponding CIs for all covariates. A detailed overview of the comparison for each covariate and dataset is shown in Fig 4. The evaluation shows that the federated-only approach is identical to the centralized Cox proportional hazards model. The hybrid approach with additive secret sharing is slightly less accurate because we do not transmit only the timeline of the actual samples. Instead, a time range is used, including intermediate time points not existing in the local datasets. This assures more privacy, as which time points are derived from which site is inapparent in the data exchange. As we show on the four benchmark datasets, this slight inaccuracy does not influence the overall interpretation of the results, which remain close to the centralized or federated-only results.
Fig 4. For each dataset, we compared the logarithmized hazard ratio and corresponding 95% CI of our algorithms for 3, 5, and 10 clients with the results of the centralized lifelines model. For all covariates (distinguished by colors), the federated-only (3, 5, 10) and hybrid approach (S3, S5, S10) resulted in almost identical results compared to the centralized calculation using lifelines (C). https://doi.org/10.1371/journal.pdig.0000101.g004
Reproduction of a clinical study
To show the practical benefit of our framework for actual clinical time-to-event studies, we attempted to reproduce the results of the ENGAGE-TIMI 48 study conducted by the TIMI study group [43]. The study data were collected by the TIMI study group as part of a phase 3, randomized, double-blind, double-dummy, parallel-group, multi-center, multi-national study, ENGAGE-TIMI 48, and contain more than 21,000 participants [43] from initially more than 1,300 sites. ENGAGE-TIMI 48 compared two different doses of edoxaban, a direct oral factor Xa inhibitor, with warfarin to evaluate the long-term efficacy and safety in patients with atrial fibrillation. Analyses were performed using a Cox proportional hazards model comparing each edoxaban dose group to warfarin and included the two randomization stratification factors. For our analysis, we split the centralized dataset equally into 3, 5, and 10 sites. We used the federated-only (3, 5, 10) and hybrid (3S, 5S, 10S) Cox proportional hazards models to reproduce the results for the five outcome variables: stroke or systemic embolic event (Stroke/see); stroke, see, or death from cardiovascular causes (Cv death/stroke/see); major adverse cardiac event (Mace); Stroke; and All-cause death. As apparent in the plots, our results are highly similar to the centralized calculation (C). The number of sites over which the data was distributed does not play any role. This shows that the federated as well as the hybrid Cox proportional hazards model could accurately reproduce the analysis of the ENGAGE-TIMI 48 study, indicating the high potential of our approaches for future multi-institutional time-to-event studies.
Discussion
Clinical time-to-event studies are mainly performed on centralized data from one or more institutions. If multiple institutions participate in one study, a complicated data collection strategy is needed, with high bureaucratic hurdles and legal pitfalls regarding the privacy of the utilized data. Prior work has already shown the potential of privacy-aware distributed analysis techniques in time-to-event analysis. However, current approaches are not complete: they are either not accessible, support only one kind of algorithm, do not integrate PETs, or are not open-source. In this work, we introduced a hybrid approach of FL and additive secret sharing for the most widely used time-to-event analysis algorithms: the Kaplan-Meier estimator for survival functions, the Nelson-Aalen estimator for cumulative hazard functions, the log-rank test, and the Cox proportional hazards model. All algorithms are bundled in our open-source platform Partea, making them easily accessible for use in clinical trials and increasing trust and maintainability through a published code base. Our analyses on several benchmark datasets and the reproduction of a previous clinical study show highly similar results compared to central time-to-event studies. Our platform Partea has the potential to be an intuitive and privacy-aware alternative to central data collection for future multi-institutional time-to-event studies with geographically distributed datasets.
The hybrid approach of FL and additive secret sharing can currently be considered state-of-the-art, privacy-aware, and potentially GDPR compliant. However, evaluating the GDPR compliance of machine learning systems is not trivial, owing to unclear criteria and definitions and the lack of jurisprudence. Even though the status of local and global models as personal data is still uncertain, it is very likely that the GDPR at least remains applicable to local models trained on personal data [15,44]. Likewise, the extent to which the addition of PETs such as additive secret sharing and DP is sufficient to result in GDPR compliance is not conclusively resolved. These questions will need answers from the courts or legislators in the future. The flexibility of Partea's open-source architecture allows it to be rapidly adapted and extended with community input in response to regulatory changes.
Open-source systems like Partea have further benefits compared to closed-source systems regarding maintainability and transparency. The source code is openly visible, so everyone can see how personal data is processed, increasing users' trust. This is also relevant for controllers of personal data, who are legally obligated under Article 32 of the GDPR to protect this data according to the state of the art in technical and organizational measures. In the case of open source, the controller can show which PETs are used, how the program treats the data, what is sent around, and whether it holds its promises. In case of any security or privacy issue, users have a much higher chance of discovering this (legally relevant) breach and of holding the data controller accountable. Another advantage is that security gaps can be identified more quickly by the community. One downside of open source is that it may also make it easier for attackers to identify existing security vulnerabilities. However, there are indications that this does not lead to more attacks. In fact, experience has shown that the security-through-obscurity of closed-source systems is very brittle [26].
In addition to these promising results, a few problems need to be considered, both technical and legal. Our hybrid scheme is only applicable if at least three sites participate in the analysis. In addition, it is apparent that the results of the hybrid approach differ slightly more from the centralized and federated-only analysis, mainly owing to the stronger privacy mechanism we implemented in this approach. Instead of sharing the exact distinct event times occurring at each site, we use a predefined timeline, including times that do not appear in the local dataset. This approach prevents the global aggregator from reading the event times of a single site. However, users can still use the federated-only approach if they trust the global aggregation server or work with non-sensitive data.
Another problem appearing in most federated learning tools is data harmonization. Our algorithms include several preprocessing steps for the computation (e.g. standardization of the data in the Cox proportional hazards model). Also, we allow for a detailed description of the data, such as the time format used (days, weeks, months, years) or the naming of the time and event columns. However, especially in the case of the Cox proportional hazards model, Partea expects similar datasets, meaning that besides the event and time columns, all other columns should be harmonized between the different sites. The automatic harmonization of data is not trivial and out of the scope of Partea. However, this also encourages the participating sites to communicate and discuss a thorough study design upfront, which might even be advantageous over automatic data harmonization.
Our approaches can be easily extended in future work. As already mentioned, by offering an open-source platform and algorithms, Partea can be quickly adapted to potential changes in privacy regulations. Also, subsequent analyses such as checking the proportional hazards assumption based on scaled Schoenfeld residuals could be integrated in the future [45,46]. Furthermore, our platform could be easily extended with further privacy-aware time-to-event analysis implementations, such as Random Survival Forests [47] or Survival Support Vector Machines [48,49]. We believe that, through the combination of extendibility through its open-source code, the strong focus on privacy, its accessibility, and its support of the most used time-to-event analysis algorithms, Partea has the potential to become the new gold standard in multi-institutional time-to-event analyses and provides various advantages over current solutions.
Supporting information
S1 Text. Dataset. (DOCX)
S1 Table. Log-rank test comparison of DP survival functions and non-DP survival functions. For all datasets, using an epsilon of 3 or 2 resulted in non-significant p-values of the log-rank test, indicating that the DP survival functions are still comparable to the original ones. Only for the small-sample-size datasets Veteran and Lung did small epsilons of 1 and 0.75 result in significant differences, in less than 2.5% of the curves. | 6,881 | 2022-09-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Reproductive Biology and Endocrinology
Identification of Differentially Expressed Ovarian Genes during Primary and Early Secondary Oocyte Growth in Coho Salmon, Oncorhynchus Kisutch
Background: The aim of this study was to identify differentially expressed ovarian genes during primary and early secondary oocyte growth in coho salmon, a semelparous teleost that exhibits synchronous follicle development.
Background
Oocyte growth is a period of intense RNA synthesis, replication and redistribution of cytoplasmic organelles, and nutrient incorporation in oviparous vertebrates. In teleost fish, this period may encompass a significant portion of the lifespan, lasting well over a decade in some species. Despite this, research on fish oogenesis has primarily focused on vitellogenesis, final maturation and ovulation, while stages of primary and early secondary oocyte growth remain largely unexplored [1,2]. For example, it remains unclear what endocrine and/or intraovarian factors regulate oocyte growth and how this period may influence timing of puberty, fecundity, egg quality, and early embryogenesis.
Similar to primordial follicle development in mammals, primary oocyte growth in fish begins with the onset of meiosis and subsequent meiotic arrest in the diplotene stage of the first prophase. The oocytes are then completely enveloped by a monolayer of presumptive granulosa cells and a thin theca cell layer and epithelial sheath are added to the surface, forming the basic follicle structure [2,3]. As the follicle develops, the nucleus of the oocyte increases in size and numerous ribosome-producing nucleoli appear around its periphery ("perinucleolus" stage). Intense RNA synthesis occurs over this period and much of the RNA present in the fully grown oocyte is thought to be synthesized at this time [4][5][6]. During primary growth alone in fish, the oocyte volume may increase as much as 1,000-to 5,000-fold [1].
Initiation of secondary growth is signified by the appearance and accumulation of cortical alveoli (formerly yolk vesicles). These endogenously synthesized secretory vesicles, analogous to cortical granules in invertebrates and other vertebrates, are derived from Golgi bodies and play important roles in the fertilization response and early embryogenesis [7]. Upon fertilization, cortical alveoli fuse with the oocyte membrane and discharge their glycoprotein contents into the perivitelline space to prevent polyspermy and entry of microbes or pathogens. Cortical alveoli increase in number during early secondary growth, initially forming a ring around the periphery of the oocyte and then accumulating inward to the nucleus. In most fishes, a brief period of oocyte lipid deposition (lipid droplet stage) occurs late in the cortical alveolus stage and prior to significant yolk incorporation. Vitellogenesis (yolk incorporation) marks the final phase of secondary growth, during which dramatic follicle growth occurs as the oocyte sequesters vitellogenin, a hepatically derived yolk protein precursor, from the bloodstream [2].
Through recent large-scale genomic studies mainly conducted on zebrafish, salmonids, and Fugu pufferfish [8][9][10][11][12][13], a number of ovarian genes have been sequenced and catalogued in databases making it possible to identify many fish mRNAs, profile their expression, and determine their function(s). Genes involved in sex differentiation and early gametogenesis [14][15][16], and final oocyte maturation [17] have received considerable attention, while other studies have focused on specific gene families such as TGFβ superfamily members [18], zona pellucida glycoproteins [19], and vitellogenin receptor (very low density lipoprotein receptor, vldlr) [20]. Through these studies highly expressed ovarian genes, such as zona pellucida glycoprotein (zp) genes and egg lectins have been revealed. However, relatively few ovarian genes have been profiled in fish and little is known about temporal gene expression during oocyte growth.
The objective of this study was to identify differentially expressed genes during primary and early secondary oocyte growth with the ultimate goal of establishing what regulates these stages of oogenesis and identifying the intraovarian factors that drive (or block) puberty onset. Coho salmon, Oncorhynchus kisutch, was selected as a model because it is a semelparous species (spawns only once in its life and then dies) that exhibits synchronous follicle development. This unique reproductive life history allows for stage-specific transcript analysis of a homogeneous clutch of follicles, which is not possible in iteroparous species like rainbow trout (O. mykiss) and Atlantic salmon (Salmo salar). Salmonids are also one of the best studied groups of fish, and the large available repository of trout and Atlantic salmon ESTs facilitates identification and functional annotation of coho salmon transcripts. In this study, suppression subtractive hybridization (SSH) was conducted with ovaries containing follicles in primary or early secondary growth, and putatively regulated genes identified by SSH were screened and then validated with real-time quantitative PCR (qPCR). Quantitative PCRs were developed for 17 genes identified by SSH, and additional assays were developed for candidate genes of interest including known follicle cell transcripts, such as follicle-stimulating hormone receptor (fshr) and anti-Müllerian hormone (amh). Lastly, a thorough assessment of changes in the mRNA/total RNA ratio during oocyte growth was conducted to determine how gene expression results are influenced by rapid growth of follicles during oogenesis. This is germane to ovarian transcript analyses in other species and potentially other rapidly growing and/or differentiating tissues. Coho salmon (2004 and 2003 broods) were reared at the Northwest Fisheries Science Center (Seattle, WA) in recirculated fresh water under a simulated natural photoperiod and fed a standard ration of a commercial diet. These salmon typically spawn in December at 3 years of age, and the same cohorts (2004 brood = cohort 1, 2003 brood = cohort 2) were used for all experiments. In October 2005, female salmon (N = 43 fish of cohort 1; N = 20 fish of cohort 2) were euthanized and their ovaries removed and weighed. At this time, cohort 1 fish were 0 + age (10 months old), 72-103 mm fork length (FL) and 4.5-13.3 g body mass with an ovary mass of 0.029 ± 0.001 g (mean ± SEM) and gonadosomatic index (GSI) of 0.32 ± 0.01. Cohort 2 fish were 1 + age (22 months old), 194-235 mm FL and 93.5-157.8 g body mass with an ovary mass of 0.400 ± 0.021 g and GSI of 0.33 ± 0.01. A piece of ovary from each fish was fixed in Bouin's solution for paraffin histology [21], and the remaining tissue was snap frozen for later RNA isolation. Histology revealed that in October 2005, ovaries of cohort 1 possessed primary growth follicles at the perinucleolus (P) stage containing minimal Balbiani material and no cortical alveoli (Fig. 1). Ovaries of cohort 2 contained follicles that were early to mid-cortical alveolus (CA) stage with cortical alveoli filling greater than 50% of the ooplasm. Samples were selected for SSH that represented primary growth (P stage, cohort 1) and early secondary oocyte growth (mid-CA stage, cohort 2). Salmon used for qPCR validation of the SSH results (N = 10 fish/stage) possessed ovaries that were in the same stages as those used for SSH.
For the template RNA assessment (see below), fish were sampled as described above at a later time point in August 2006 (N = 5/cohort). At this time, fish from cohort 1 were 1 + age (20 months old) and follicles were generally late P stage with most oocytes showing very few cortical alveoli in the periphery of the ooplasm, while cohort 2 fish were 2 + age (30 months old) and oocytes were in the yolk granule stage.
Animals and sampling
Fish were reared and handled according to the policies and guidelines of the University of Washington Institutional Animal Care and Use Committee (IACUC Protocol #2313-09).
RNA isolation and cDNA synthesis
Ovarian total RNA was isolated with Tri-Reagent (Molecular Research Center, Cincinnati, OH). For SSH, total RNA pools were generated from 18 P stage and 3 CA stage ovaries (whole ovaries minus the part for fixation) and mRNA was further isolated via the PolyATract mRNA Isolation System (Promega, Madison, WI). For individual samples for qPCR, the MicroPoly(A)Purist kit (Ambion, Austin, TX) was used. Samples for qPCR were reverse transcribed with SuperScript II (Invitrogen, Carlsbad, CA) and either 100 ng mRNA or 1000 ng total RNA in 20 μl reactions.
Subtractive hybridization and analysis
SSH was performed with 1 μg of mRNA per stage using the PCR-Select cDNA Subtraction and Advantage cDNA PCR kits (Clontech, Palo Alto, CA). Subtracted cDNA samples were cloned in TOPO pCR 2.1 (Invitrogen) and plated on LB agar containing 100 μg/ml kanamycin. A total of 288 clones per library were randomly selected for sequencing with Big Dye Terminator (Applied Biosystems (ABI), Foster City, CA) on an ABI 3730 sequencer using the M13 reverse primer. Sequence chromatogram files were trimmed for quality (phred), vector-screened (cross_match), and analyzed in the following manner to determine transcript identities: sequences ≤ 30 bp were removed and the remaining sequences were analyzed for redundancy with CAP3 [22], and analyzed locally using blastx against the NCBI nonredundant (nr) protein database, blastn against the NCBI nucleotide (nt) database, blastn against the EST database (est_others), and blastx against the Gene Ontology (GO) database [23]. The e-value cutoff was 10^-5 for blast searches. Nomenclature of the Zebrafish Information Network (ZFIN) was used where possible, except for the zp genes, for which we followed the recommended nomenclature of Spargo and Hope [24].
Figure 1. Histological sections of coho salmon ovaries with perinucleolus (A) and mid-cortical alveolus stage follicles (B). Panel A shows a representative ovary from cohort 1 fish and panel B shows a representative ovary from cohort 2 fish used for subtractive hybridization and qPCR validations. The scale bar = 100 μm in each panel; n, nucleoli; ca, cortical alveoli.
Quantitative PCR
Approximately 60 PCR primer sets were designed from sequences obtained by SSH with MacVector software (Accelrys, San Diego, CA) and undigested SSH cDNA was used as template to screen for differentially expressed genes with semi-quantitative PCR as described by Goetz et al. [25]. Genes that appeared to be differentially expressed in the initial screen were measured with real-time qPCR using primers listed in Table 1. Assays were run on an ABI 7700 Sequence Detector in 96-well plates using the standard cycling conditions: 50°C for 2 min, 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. Reactions consisted of 1× Power SYBR Green PCR master mix (ABI), 150 nM of each gene-specific forward and reverse primer, and 0.01 ng cDNA template (based on the mRNA loaded into the RT reactions). For total RNA samples used in the template RNA assessment, 1 ng cDNA was loaded per reaction.
Triplicate standard curve samples generated from a serial dilution of pooled RNA from ovaries of 6 previtellogenic coho salmon were included in each plate. The standard curve (log input cDNA template versus cycle threshold) was linear for cDNA template ranging from 0.001-1 ng. Results were analyzed using the relative standard curve method (for details see the ABI Prism 7700 Sequence Detection System User Bulletin #2, P/N 4303859). Experimental samples were run in duplicate for the housekeeping gene (see below) and not replicated for the target genes. No amplification controls (NAC) that lacked reverse transcriptase and a no template control (NTC) that lacked template altogether were evaluated in each plate to confirm the absence of genomic DNA in the RNA preparations and the absence of PCR carryover contamination, respectively. Negative control samples showed either no detectability or negligible values (>10 Ct separation from samples). Melt curve analysis was included for each target gene to ensure that a single product was amplified. In cases where more than one product was generated the PCR was redesigned. In addition, a qPCR product from each plate was directly sequenced to verify that the target was successfully amplified. Target gene results were normalized to elongation factor-1 alpha (ef1a), which served as a housekeeping gene and has been used in previous gonadal studies [e.g., [15,26]]. Ef1a transcript levels were not different across stages when mRNA was used as template (P = 0.5). The lowest normalized value for each gene was arbitrarily set to 1 to enhance data presentation.
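The relative standard curve computation described above can be sketched as follows (our illustrative code with synthetic Ct values, not the ABI software's implementation; slope, intercepts, and Ct values are hypothetical):

```python
# Relative standard curve method: fit Ct vs. log10(input) on the dilution
# series, read off quantities for unknowns, then normalize to ef1a.
import numpy as np

def fit_standard_curve(log10_input, ct):
    slope, intercept = np.polyfit(log10_input, ct, 1)
    return slope, intercept

def relative_quantity(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series: 0.001-1 ng template, measured in triplicate.
dilutions = np.log10(np.repeat([1, 0.1, 0.01, 0.001], 3))
ct_std = 20.0 - 3.4 * dilutions + np.random.default_rng(1).normal(0, 0.1, 12)
slope, intercept = fit_standard_curve(dilutions, ct_std)

target_q = relative_quantity(24.1, slope, intercept)  # target gene Ct
ef1a_q = relative_quantity(18.3, slope, intercept)    # housekeeping gene Ct
normalized = target_q / ef1a_q                        # normalized expression
```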
Template RNA assessment
The RNA composition and size of ovarian follicles change dramatically during oocyte growth, both of which can influence transcript levels depending on the method of normalization. Therefore, we compared transcript levels when measured from total RNA versus mRNA as template, correcting for follicle number and RNA recovery. Follicles were counted prior to RNA isolation, and the expression of several genes was measured in total RNA and mRNA preparations from the same individual ovary samples. Briefly, fish were euthanized and body mass, FL, and ovary mass were recorded. Fragments weighing ~40 mg taken from the middle region of the ovary were snap frozen for RNA analysis, and an additional piece was fixed in Bouin's solution for histology. Triplicate fragments of fresh ovary were weighed and the follicles counted under a dissecting microscope to determine the average follicle mass for each fish. Fecundity and RNA yield per follicle estimates were based on average follicle mass data. Gene expression was assessed in total RNA and mRNA preparations for a housekeeping gene, ef1a; a downregulated gene, ferritin H-3 (fth3); and an upregulated gene, fshr.
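The per-follicle corrections described here amount to a few ratios. A minimal sketch, with a hypothetical function and made-up masses and counts:

```python
def per_follicle_metrics(ovary_mass_g, fragment_masses_g, fragment_counts,
                         total_rna_ug_per_g):
    """Derive per-follicle quantities from weighed, counted fragments."""
    # Average follicle mass from the triplicate weighed fragments.
    avg_follicle_mass_g = sum(fragment_masses_g) / sum(fragment_counts)
    fecundity = ovary_mass_g / avg_follicle_mass_g   # follicles per ovary
    follicles_per_g = 1.0 / avg_follicle_mass_g
    rna_ug_per_follicle = total_rna_ug_per_g / follicles_per_g
    return avg_follicle_mass_g, fecundity, rna_ug_per_follicle

# Example with invented numbers (masses in g, RNA yield in ug per g ovary):
print(per_follicle_metrics(2.5, [0.041, 0.040, 0.039], [820, 790, 805],
                           4750.0))
```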
Statistical analysis
Gene expression results were compared by unpaired t-tests, and Welch's correction was applied in cases where variances were unequal across stages (Prism 4, GraphPad Software, San Diego, CA). The minimum level of statistical significance was set at P < 0.05.
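For illustration, the same comparison can be reproduced with SciPy. One assumption to flag: Prism selects Welch's correction from its own variance test, whereas the sketch below substitutes Levene's test, a common stand-in.

```python
from scipy import stats

def compare_stages(p_stage_values, ca_stage_values, alpha=0.05):
    """Unpaired t-test; apply Welch's correction when variances differ."""
    # Test for unequal variances first (Levene's test is our substitute
    # for Prism's internal variance check).
    _, p_var = stats.levene(p_stage_values, ca_stage_values)
    equal_var = p_var >= alpha
    t, p = stats.ttest_ind(p_stage_values, ca_stage_values,
                           equal_var=equal_var)
    return t, p, equal_var

# Hypothetical normalized expression values (n = 10 fish per stage):
p_stage = [1.0, 1.4, 2.1, 1.8, 1.2, 1.6, 1.9, 1.3, 1.7, 2.0]
ca_stage = [0.6, 0.9, 0.7, 1.1, 0.8, 0.5, 1.0, 0.7, 0.9, 0.6]
print(compare_stages(p_stage, ca_stage))
```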
General library statistics
The SSH library putatively enriched in cDNAs/transcripts that were more abundant at the P stage yielded 275 sequences (>30 bp) with an average size of 420 bp that clustered to form 55 contigs (of two or greater sequences with >90% identity) and 155 singletons. Approximately 80% of the sequences were identified as similar to annotated genes, while 15% had similarity to unannotated genes (entries of unknown function, e.g. hypothetical proteins) and 5% were novel (no significant identity to GenBank sequences). The SSH library putatively enriched in genes upregulated at the CA stage yielded 267 sequences with an average size of 401 bp, which clustered to form 47 contigs and 103 singletons. Approximately 71% of the sequences were similar to annotated genes, 16% were similar to unannotated genes, and 13% were novel. The two predominant putative genes in the P and CA libraries with a frequency of ~10% were fth3 and serum lectin isoform 2 (lcal), respectively.
Expressed sequence tags (ESTs) were submitted to the NCBI database [GenBank: EX152144-EX152685]. Genes further assessed with qPCR are listed in Table 2.
Gene expression analysis
Zona pellucida genes, including an EST similar to factor in the germline alpha (figla), were more abundant in P stage follicles compared to CA stage follicles (Fig. 2). All genes falling into this cascade (figla, zpx, zpc, eeg) showed a similar pattern, but the fold-difference across stages was most pronounced for figla and zpx where mRNA levels were elevated ~2-fold at the P stage. Several genes associated with lipoprotein uptake and yolk processing exhibited this pattern, including vldlr, somatic lipoprotein receptor (vldlro), and cathepsin B (ctsba) (Fig. 3). Cathepsin D (ctsd) and cathepsin Z (ctsz), however, were not differentially expressed across stages when assessed with qPCR ( Fig. 3). Lipoprotein lipase (lpl) and apolipoprotein E (apoe) mRNA levels were significantly higher during the CA stage.
Genes encoding cortical alveoli components had variable expression patterns (Fig. 4). Lcal was significantly elevated in CA stage samples, whereas rhamnose binding lectin STL3 (lrham) and alveolin (alv) were not statistically different across stages.
A number of other genes not falling into the above cascades were differentially expressed across stages (Fig. 5).
Fth3, DnaJ subfamily A member 2 (dnaja2), and cyclin E (ccne) transcripts were significantly elevated at the P stage, while retinol dehydrogenase 1 (rdh1) and coatomer protein epsilon subunit (cope) showed the opposite pattern and were higher at the CA stage. Anterior pharynx defective 1b (aph1b) was not differentially expressed based on qPCR results. Finally, three transcripts likely originating from the follicle cells, fshr, amh, and gonadal soma-derived growth factor (gsdf), were dramatically upregulated at the CA stage.
Template RNA assessment
The mean quantity of total RNA isolated per gram of ovary for cohorts 1 and 2 was 4750 and 1509 μg, respectively; thus, the total RNA yield per g ovary was about 3 times greater for cohort 1 (Table 3). However, because there were 40 times more follicles per tissue mass for cohort 1, the total RNA yielded per follicle was lower relative to cohort 2. The average mRNA yielded per μg total RNA was also greater for cohort 1 relative to cohort 2. Again, due to the difference in follicle number, the mRNA yielded per follicle was actually lower for cohort 1.

[Table 2 footnotes: GO:0016491, oxidoreductase activity. † The top blast hit was cathepsin Y, which is a synonym for the cathepsin Z used here. †† The top blast hit was named VHSV induced protein, but this appears to be a coatomer protein subunit based on similarity to other sequences. * The blastn similarity score is shown instead of blastx due to the sequence corresponding primarily or completely to the 3' UTR.]

Complementary DNA sequences for additional candidate genes were obtained from a follicle/interstitial cell enriched library [29].
Ef1a transcript levels were similar across stages based on template mRNA (0 to 1.2-fold change), but decreased (1.5 to 4.5-fold) in more advanced ovaries based on total RNA template (Table 3). This decrease was most pronounced when total RNA based data were expressed on a per follicle basis. Other housekeeping genes, such as acidic ribosomal protein, showed results similar to ef1a (data not shown). Fth3 declined in more advanced ovaries with all means of analysis, but this decline was most pronounced with total RNA template. Fshr showed a 6.9 to 7.9-fold increase.

Figure 2 Abundance of zona pellucida glycoprotein-related transcripts during primary and early secondary oocyte growth in salmon. Open bars denote perinucleolus (P) stage samples, while solid bars denote cortical alveolus (CA) stage samples. Each bar represents the mean + SEM of 10 fish per stage with *P < 0.05, **P < 0.01 and ***P < 0.001 indicating statistical differences. Relative transcript levels were normalized to ef1a.
Figure 3 Abundance of lipoprotein/lipid uptake and processing transcripts during primary and early secondary oocyte growth. Open bars denote perinucleolus (P) stage samples, while solid bars denote cortical alveolus (CA) stage samples. Each bar represents the mean + SEM of 10 fish per stage with *P < 0.05, **P < 0.01 and ***P < 0.001 indicating statistical differences. Relative transcript levels were normalized to ef1a.
Discussion
The goal of this study was to identify differentially expressed ovarian genes during the little-studied periods of primary and early secondary oocyte growth in coho salmon. A number of differentially expressed genes were successfully identified through SSH and qPCR that are known to play important roles during oogenesis, such as zona pellucida development, sequestration and processing of lipoproteins, cell cycle control, and the fertilization response. Interestingly, some genes involved in vitellogenesis, fertilization, and embryogenesis were more highly expressed during primary growth than early secondary growth. This pattern is intriguing because proteins encoded by many of these genes are not thought to be utilized until 1-3 years later in this species. Through work mainly conducted in Xenopus, it has been well documented that maternal RNAs are deposited in the oocyte early in oogenesis and then stored as messenger ribonucleoprotein particles (mRNPs), which contain proteins such as Y-box proteins and DEAD-box RNA helicases that mask RNA from the translational apparatus [5,27,28]. In the present study, the ccne transcript was upregulated during primary growth and would likely represent a maternal RNA based on its role in early embryogenesis (see below). However, other transcripts, such as vldlr and cathepsins, associated with later aspects of oogenesis itself were also highly expressed during primary growth. Although protein data are necessary to resolve this issue, one possible explanation for the early transcription of such genes is that some oogenesis-related mRNAs are subject to masking during early oogenesis as described for traditional maternal RNAs.
Other transcripts such as lpl, apoe, lcal, rdh1, amh, gsdf, and fshr increased significantly during early secondary growth when CA are abundant in the oocytes, but lipid droplets are not yet evident. Some of these transcripts were likely derived from the follicle cells and showed a dramatic increase across stages (2 to 6 fold). These data together with results of previous studies discussed in detail below, suggest an increased role of the follicle cells during early secondary growth in coho salmon and potential regulation of this transition by FSH and TGFβ family peptides.
Based on GO annotation and examination of the literature, the majority of genes revealed by SSH were of oocyte origin. The SSH method employed enriches for rare transcripts through PCR steps; however, it appears that copious mRNA from the oocytes overshadowed differentially expressed genes originating from the follicle/interstitial cells. This idea is supported by the observed differences in abundance of follicle cell transcripts, such as amh and gsdf, which were candidate genes obtained from a follicle/interstitial cell enriched library [29]. As first noted by Goetz and colleagues [30], these findings suggest that it is necessary to enrich for these cell layers prior to SSH to increase the likelihood of revealing rare transcripts of the granulosa or theca/interstitial cells.
Figure 4 Abundance of cortical alveoli component transcripts during primary and early secondary oocyte growth.
Analyzing transcript levels across stages of oogenesis
One problem with studying the ontogeny of ovarian gene expression, especially in oviparous vertebrates with large eggs, is the dramatic change in oocyte size and RNA composition that occurs during oogenesis. Changes in the size and number of follicles per tissue mass and instability of housekeeping genes complicate across-stage comparisons, making it difficult to interpret transcript abundance data in a biologically relevant way [e.g., 21,31-34]. To gain insight into this problem, we quantified RNA yields per tissue mass and per follicle, and transcript abundance in total RNA and mRNA preparations. As oogenesis progressed, total RNA yielded per tissue mass and the proportion of mRNA relative to non-polyadenylated RNA declined (Table 3). Furthermore, the yield of RNA per follicle was higher in more advanced follicles, but the amount of mRNA did not increase proportionally with follicle size. This change in RNA composition had a significant effect on apparent transcript levels for a variety of genes expressed in the oocyte and follicle cells, depending on whether mRNA or total RNA was used as template for cDNA synthesis. Housekeeping genes, such as ef1a, were similarly expressed across stages when mRNA was used as the template, irrespective of whether data were calculated per unit template RNA or per follicle. Our results indicate that if total RNA is used as a template without normalization to a housekeeping gene, one can get misleading results, such as an apparent decline in transcripts that are actually stably expressed, an accentuated decline in downregulated genes, or no change in transcripts for upregulated genes. Results were most skewed in total RNA preparations when presented on a per follicle basis and most closely resembled mRNA based results when normalized to ef1a. In summary, use of mRNA as template and normalization of qPCR data to ef1a generated results that best reflected transcript abundance within the follicle.
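A toy calculation (all numbers invented) makes the distortion explicit: a gene whose copy number per follicle is constant appears to decline when expressed per unit template, and declines more steeply per unit total RNA, because the non-polyadenylated fraction grows faster than the transcript pool.

```python
# Invented illustration of how the normalization basis changes the apparent
# fold change for a gene whose copies per follicle are actually constant.
copies_per_follicle = {"P": 1000.0, "CA": 1000.0}   # truly stable gene
mrna_ng_per_follicle = {"P": 0.5, "CA": 1.0}        # mRNA grows 2-fold
total_rna_ng_per_follicle = {"P": 5.0, "CA": 40.0}  # total RNA grows 8-fold

def fold_change(metric):
    return metric["CA"] / metric["P"]

per_mrna = {s: copies_per_follicle[s] / mrna_ng_per_follicle[s]
            for s in ("P", "CA")}
per_total = {s: copies_per_follicle[s] / total_rna_ng_per_follicle[s]
             for s in ("P", "CA")}

print(fold_change(per_mrna))             # 0.5   -> apparent decline
print(fold_change(per_total))            # 0.125 -> steep apparent decline
print(fold_change(copies_per_follicle))  # 1.0   -> the biological reality
```

Normalizing the target to a housekeeping gene measured in the same template cancels the changing denominator, which is consistent with the observation above that ef1a-normalized results tracked the mRNA-based results.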
Zona pellucida glycoprotein genes
The zona pellucida is the acellular membrane that not only encloses the oocyte, but is also critical to optimal oocyte growth and to preventing polyspermy after fertilization. The zona pellucida in vertebrates consists of highly sulfated zona pellucida glycoproteins (ZPs). Four ZP subfamilies, ZPA, ZPX, ZPB, and ZPC, have been described, and all but ZPA are found in fish [19,24]. Several coho salmon zp genes were revealed by SSH and all exhibited significantly elevated mRNA levels during primary growth relative to early secondary growth (Fig. 2). Consistent with this, other studies have shown that zp transcripts are abundant in oocytes during early oogenesis and make up a significant portion of the ovarian transcriptome [7-9,13,31].
Few studies in fish have focused on the zp-related transcription factor, figla. In mammals, however, figla is required for primordial follicle development and transcription of the zp genes [6,35]. Like the zp genes, figla is highly expressed in primary oocytes and has been localized to the ooplasm in medaka fish, Oryzias latipes [14]. In the present study, mRNA levels for both the zp genes and figla EST were higher during primary growth. The similar profile of figla and the zp genes in this study along with work in other species [35,36] suggests that transcription of this family of genes is highly coordinated.
Lipoprotein uptake and processing
Several transcripts associated with vitellogenesis, such as vldlr and vldlro, were highly expressed during primary growth and declined significantly by early secondary growth (Fig. 3). Vldlr has been widely studied and is responsible for uptake of hepatically-derived vitellogenin by the oocyte [2]. On the other hand, very little is known about vldlro, which contains an O-linked sugar domain, is expressed in the ovary and somatic tissues of some fishes [33,37,38], and is thought to mediate uptake of lipoproteins other than vitellogenin [37]. Findings of the present study are consistent with work in rainbow trout which showed that transcription of vldlr began shortly after female differentiation [39] and declined by early vitellogenesis, becoming nearly undetectable by mid-vitellogenesis [20]. Given that vldlr mRNA levels are low during much of the period of active uptake of vitellogenin, it is widely believed that the Vldlr protein is recycled during vitellogenesis [20,37,40].
Two genes, lpl and apoe, associated with lipid or lipoprotein uptake by oocytes increased 2-3 fold during early secondary growth. Lpl cleaves fatty acids from plasma lipoproteins and thus facilitates lipid transport across biological membranes, while Apoe is a lipid binding protein that mediates recognition and internalization of plasma lipoproteins by cell surface receptors. The timing of this increase is interesting considering that the CA stage just precedes the appearance of lipid inclusions within the salmon oocyte. Few studies have focused on Lpl and Apoe in the teleost ovary. However, studies in rainbow trout [41,42] demonstrated that lpl mRNA levels and Lpl activity increased steadily during vitellogenesis and peaked at late vitellogenesis. In sea bass (D. labrax), lpl mRNA was localized to the follicle cells and high lpl mRNA levels and Lpl activity coincided with the appearance of oocyte lipid inclusions [43]. Thus mounting evidence suggests that Lpl may be involved in lipid uptake associated with secondary oocyte growth in fish. To our knowledge, apoe mRNA has been measured in the ovary in only one other study in fish [15], which demonstrated expression during very early oogenesis. Together these data indicate there is increased expression of genes associated with lipid transport prior to significant lipid incorporation.
Cathepsins are lysosomal enzymes that, in the oocyte, are responsible for proteolytically cleaving vitellogenin into its constituent yolk proteins; they also play a role in oocyte reabsorption during atresia [2,44,45]. Coho salmon cathepsins revealed by SSH showed different patterns of expression during oocyte growth. The ctsb transcript was more abundant during primary growth relative to early secondary growth. Neither ctsd nor ctsz mRNA levels differed across stages, so these would be considered false positives of SSH, which are commonly encountered with this technique [46]. Ctsd is considered the major protease responsible for cleaving vitellogenin into yolk proteins [2]. However, in the barfin flounder, Ctsb was implicated in this role [47]. In the teleost Fundulus heteroclitus, ctsz expression was relatively stable during oocyte growth and maturation [48], and in rainbow trout ctsz was upregulated during maturation [17] and correlated with egg quality [49]. Data generally suggest that ovarian cathepsins may be regulated post-transcriptionally and thus transcript abundance may not correlate well with cathepsin enzymatic activity [42,44,50].
Cortical alveoli components
Studies in Xenopus have shown that 70% of the proteins contained in cortical granules are lectins [7]. In fish oocytes, lectins have a number of biological functions, including the block to polyspermy at fertilization and defense against pathogens [51]. In the present study, lcal, a Ca2+-dependent (C-type) lectin, was more highly expressed at the CA stage and was the predominant transcript from the CA stage SSH library, which is not surprising given the abundance of CA evident in the ovarian histology. Indeed, C-type lectins are often highly represented in ovary transcriptomes in fish [9,16,52]. In contrast, however, two other CA components, lrham and alv, were not differentially expressed across stages. Lrham is an oocyte lectin implicated in block of polyspermy that was localized to CA of rainbow trout [53]. Alv is a metalloproteinase first identified in medaka that is released by CA after fertilization to induce zona pellucida (chorion) hardening [54]. Interestingly, when examined as a group, the CA component genes identified in the present study were not transcribed coordinately during oocyte growth as shown in rainbow trout [55].
Other ovarian transcripts
A subunit of ferritin heavy polypeptide, fth3, was the predominant transcript in the P stage library, and qPCR verified that mRNA levels were elevated during primary growth. Ferritins are important because they store and transport iron atoms [56]. Since iron is a critical constituent of metalloproteins, such as enzymes and oxygen carriers, it is essential to all organisms. Free iron, however, can be highly toxic to cells; ferritin thus protects cells from the damaging effects of iron while keeping it readily available. Studies have documented high levels of ferritin subunit mRNAs in the ovary [13,49], but little is known about their specific role.
Like ferritin, dnaja2 and ccne showed significantly higher transcript levels during primary growth. Dnaja2 encodes a chaperone protein associated with unfolded protein and heat shock protein binding, while cyclins are positive cell cycle regulators that appear to be profuse in the fish ovary [30,57]. Ccne is transcribed and stored during oocyte growth in goldfish and is thought to be important to the first embryonic cell cycles [58]. The high expression of ccne during primary growth in this study is consistent with its early transcription in goldfish and suggests it is a classic maternal mRNA, as shown for some other cyclins [27].
Rdh1 and cope showed an opposite profile, with transcript levels higher during early secondary growth. Retinol dehydrogenases are involved in the synthesis of retinoic acid, the active form of vitamin A, which regulates cellular growth and differentiation, embryogenesis, and reproduction in vertebrates [59]. Because of the diverse functions of retinol dehydrogenases, it is unclear what role Rdh1 may play during previtellogenic growth. However, based on the recent detection of other retinoid-related transcripts, such as retinol dehydrogenase type II and retinol binding protein in the trout ovary [11], several players in this cascade are present during oogenesis and likely play an important role in oogenesis and/or early embryogenesis.
The cope transcript encodes the epsilon subunit of a coatomer complex protein. One well characterized coatomer protein is clathrin, which is involved in receptor-mediated endocytosis and the transport of proteins from the Golgi network [60]. At this point it is not possible to determine what process the coatomer complex gene identified here is associated with, but perhaps it is involved in uptake of vitellogenin by receptor-mediated endocytosis, protein trafficking, and/or the immune response.
Transcripts for fshr, amh, gsdf, and lpl, which are all likely produced in follicle cells, exhibited the largest changes in abundance across the P and CA stages, with levels increasing 4-5 fold. The increase in fshr during this transition from primary to secondary growth, together with previous data showing a progressive increase in plasma FSH from the P to the lipid droplet stage and effects of FSH on ovarian steroidogenesis [21,61-63], suggests that FSH plays an important role in the endocrine control of this phase of oogenesis in coho salmon. It is not known, however, whether the expression of follicle cell transcripts such as those identified in this study is regulated by FSH and/or other endocrine or paracrine factors.
Transcript levels for two members of the TGFβ superfamily, amh and gsdf, also increased 4-5 fold from the P to CA stage. In female mammals, AMH is produced in the granulosa cells, increases at puberty onset reaching peak levels in small antral follicles, diminishes in later stages, and is no longer detectable during the FSH-dependent final stages of follicle growth or in atretic follicles [64].
Recently, the structure of Amh has been characterized in fish [65,66] and studies have primarily focused on its expression during sex differentiation [39,65]. During oogenesis in zebrafish, amh has been localized to granulosa cells where transcript levels peak at the CA stage and progressively decline at onset of yolk incorporation, reaching non-detectable levels by late vitellogenesis. Gsdf is a recently identified gonad-specific cytokine that appears to exist only in teleosts [67]. In rainbow trout, gsdf mRNA was localized to somatic cells of the genital ridge during embryogenesis and Sertoli and granulosa cells during gametogenesis [67]. Gsdf plays a role in primordial germ cell and spermatogonial proliferation, but its role in the ovary is unclear, as it did not induce oocyte proliferation in trout. Based on increased gsdf transcript levels during secondary growth in coho salmon and the ability of Gsdf to stimulate germ cell proliferation in trout, it is possible this factor plays a role in granulosa cell proliferation that occurs during this period.
Conclusion
This study sheds light on differentially expressed ovarian genes during previtellogenic oocyte growth in coho salmon and provides a platform for future studies on the regulation of this process. Major gene families represented in the SSH libraries included zp genes, lipoprotein receptors, yolk proteases, and CA components, most of which appear to be derived from the oocyte. Transcript abundance was measured for selected genes identified by SSH and candidate genes likely expressed in follicle cells, providing for some genes the first transcriptional profile during early oogenesis in fish. Interestingly, a number of oogenesis related transcripts that will not be utilized until 1-3 years later in coho salmon were highly expressed during primary growth. Clearly, further studies are necessary to determine when these mRNAs are translated and if these transcripts may be subject to masking during early oogenesis. Finally, we observed increased expression of genes encoding FSH receptor, TGFβ family peptides, proteins involved in lipid uptake, and a CA component during the transition from primary to secondary oocyte growth. This period in coho salmon coincides with increased FSH signaling. However, the degree to which FSH regulates differentially expressed genes identified in this study is not known and will be the subject of our future investigations.
Chimeria:Grayscale: An Interactive Narrative for Provoking Critical Reflection on Gender Discrimination
ABSTRACT
Roleplaying has a history of serving diverse aims, including art, entertainment, therapeutic purposes, and political action (Piper 1973; Gygax 1978; Boal 1993; Matthews et al. 2014). This paper presents an interactive narrative called Grayscale that uses roleplaying to provoke critical reflection through modeling workplace gender discrimination. Grayscale's interface resembles a streamlined email system, constituting what Henry Jenkins has termed an "embedded narrative" since the narration is distributed across elements in the space (e.g. emails, notes, draft messages).
KEYWORDS
interactive narrative; reflection; roleplay; gender.
I. CHIMERIA
We have chosen to model gender discrimination through the Chimeria platform (Harrell 2017), a platform that supports simulation of social category membership in virtual identity systems through: (1) modeling the underlying structure of many social categorization phenomena with a computational engine and (2) enabling players to build their own creative applications about social categorization using the engine as a backbone. Chimeria simulates experiences based upon social group membership using a data-driven approach and may take multiple forms (e.g., a 2D visual novel style game (Harrell 2014), a fictitious chat narrative set in a social network (Harrell 2013), or a 3D virtual environment). In our narrative, players take on the role of a newly hired Human Resource Manager at an inhospitable corporation eponymously called Grayscale. The player character is afforded some agency through the use of the company's email system (see Figure 1), including the ability to customize its interface. The narrative itself is presented as a sequence of emails, some of which the player can respond to.
The narrative's central theme is that of the struggles produced by mediated communication in the face of gendered workplace microaggressions (Basford 2013). The protagonist is melancholic, though the story uses humorous satire. The story also explores accompanying social categorizations, including "activism", conformity, feminism, and misogyny. As the player receives both banal and incendiary fictional emails, their character will occasionally be on the receiving end of a microaggression, or made to observe a microaggression experienced by a peer. When afforded the opportunity to respond, the player is able to simultaneously engage with the systematicity of disempowering gendered interactions while exploring their affective and material repercussions. The narrative is driven by other characters' reactions to the protagonist's responses. Consider the excerpt in Figure 2. Choosing the first option results in the protagonist's categorization shifting towards "activism" along a spectrum from "activism" to conformity. One character within the narrative becomes increasingly emboldened and empowered by observing such disruptions to the social order. Most characters, however, become increasingly hostile in response to threats to stability. Over the course of a single playthrough, players will experience several interactions like that of Figure 2, resulting in one of many narrative conclusions.
"If you make the decision to wear yoga pants to work, PLEASE do not lean against the counter in the break room. Some of us are trying to get work done here!'" In response, the player can choose 1 of 2 options: (1) "Please do not push the blame for your concentration problems onto others. The particular garment you take issue with is acceptable according to the official dress code." (2) "I'm sure that your needs can be accommodated in the name of productivity."
III. CONCLUSION
We use Chimeria:Grayscale to demonstrate how the Chimeria engine can be leveraged to create compelling, socially nuanced roleplay experiences. Chimeria's ability to model the specifics and dynamics of identity allow it to portray social interactions with increasing nuance. This stands in stark contrast to a large swath of games that do not highly value complex models of identity for nonplayer characters. We hope taking on roles within Chimeria will agitate players' critical awareness of the socially impactful themes raised by its narrative.
ACKNOWLEDGEMENTS
We would like to acknowledge the efforts of Chong-U Lim, Vinnie Byrne, Laurel Carney, Sofia Ayala, Jackie Liu, and Yao Tong in making this work possible.
"Computer Science"
] |
Lysosomal Abnormalities in Cardiovascular Disease
The lysosome, a key organelle for cellular clearance, is associated with a wide variety of pathological conditions in humans. Lysosome function and its related pathways are particularly important for maintaining the health of the cardiovascular system. In this review, we highlighted studies that have improved our understanding of the connection between lysosome function and cardiovascular diseases with an emphasis on a recent breakthrough that characterized a unique autophagosome-lysosome fusion mechanism employed by cardiomyocytes through a lysosomal membrane protein LAMP-2B. This finding may impact the development of future therapeutic applications.
Introduction
The lysosome is a membrane-bound cell organelle that contains digestive enzymes. Lysosomes break down many biomolecules and organelles and, therefore, are involved in a multitude of cellular processes. Thus, lysosome-mediated degradation is considered to be one of the major mechanisms by which cells degrade long-lived or damaged proteins and organelles [1]. Consistent with its key functions in maintaining cellular homeostasis, defects in lysosome functions lead to the development of many human diseases, including cardiovascular diseases [2]. Cardiovascular disease is the leading global cause of death. According to the 2018 report [3], the American Heart Association estimated that cardiovascular disease accounts for more than 800,000, or approximately one in three, deaths in the United States each year. Although there are many causative factors that contribute to the pathogenesis of cardiovascular diseases, lysosome function is tightly correlated with the progression of these diseases [4].
This review focuses on recent advances in our understanding of the lysosome's roles in cardiovascular diseases. We highlight new mechanisms by which lysosomes contribute to the pathogenesis of cardiovascular diseases and discuss potential therapeutic strategies for clinical applications.
Lysosome Dysfunction and Cardiovascular Diseases
Lysosomes have long been established as the degradative end points for intracellular cargoes, including unfolded proteins, damaged organelles, and nucleic acids. The degradative function of the lysosome is accomplished by over 60 proteases, lipases, nucleases, and other hydrolytic enzymes that break down cellular cargoes for immediate recycling or storage [5]. Many cellular events, such as endocytosis and autophagy, use lysosomes as the common terminal nexus [6]. Therefore, lysosome dysfunction has detrimental impacts on many cellular processes.
Lysosome dysfunction has been characterized in numerous cardiovascular diseases [4]. Many studies have provided insights into how the lysosome functions in the context of the cardiovascular system. One major category of lysosome dysfunction that contributes to cardiovascular disorders is the lysosomal storage disorders (LSDs). LSDs comprise a group of diseases caused by a deficiency of lysosomal enzymes, membrane transporters, or other proteins involved in lysosomal biology [7]. Many LSD patients show very severe cardiac phenotypes, including hypertrophic and dilated cardiomyopathy, coronary artery disorders, and valvular defects. The causative factors of these LSDs are varied, and many mutations in lysosomal genes have been identified as responsible for disease pathogenesis. For example, Pompe disease is caused by mutations in the acid α-glucosidase (GAA) gene that lead to intralysosomal accumulation of glycogen [8]. Danon disease is caused by a deficiency in lysosomal-associated membrane protein-2 (LAMP2), a gene on the X chromosome that encodes a lysosomal membrane protein [9,10]. A deficiency in the lysosomal enzyme α-galactosidase A (α-Gal A) causes Fabry disease due to buildup of the glycosphingolipid globotriaosylceramide (Gb3) in the body [11]. Mucopolysaccharidosis (MPS) is a large group of storage diseases caused by impaired lysosomal degradation of glycosaminoglycans (GAGs). Multiple mutations in genes encoding lysosomal enzymes involved in the catabolism of GAGs have been associated with MPS. Moreover, heart and valve defects are present in all MPS types, and heart failure is a common cause of lethality in MPS [12]. Due to the broad range of lysosomal components that can cause LSDs, the cellular mechanisms underlying pathogenesis could be highly disease specific. For example, impairment of autophagy, a lysosome-dependent pathway, plays a causative role in Danon disease [9]. The defective autophagosome-lysosome fusion in Danon cells causes accumulation of damaged mitochondria and, in turn, compromises the energy metabolism of these cells.
In addition to lysosomal components, regulation of lysosome biogenesis is also tightly linked to the development of cardiovascular diseases. The transcription factor EB (TFEB) is a master regulator of lysosome biogenesis [13], coordinating the expression of lysosomal hydrolases, membrane proteins, and genes involved in lysosome biogenesis. Subcellular localization of TFEB is strictly controlled. Under nutrient-rich conditions, TFEB is mainly localized in the cytosol. However, upon starvation or under conditions of lysosomal dysfunction, TFEB rapidly translocates to the nucleus and activates the transcription of its downstream targets [14]. Proteotoxicity and poor protein quality control have been observed in the vast majority of human hearts with end-stage heart failure, which may contribute to disease progression [15,16]. Pan et al. reported that myocardial TFEB expression is dysregulated in mice with advanced cardiac proteinopathy [17]. In this study, the authors found that TFEB is required for sustaining lysosome-associated cellular clearance. Moreover, TFEB overexpression is sufficient to facilitate this activity and thereby protects against misfolded protein-induced proteotoxicity. In myocardial ischemia-reperfusion (I/R) injury, programmed cell death and/or necrosis cause substantial cardiomyocyte (CM) loss. The forced expression of TFEB attenuates CM death through restoration of lysosome-associated cellular clearance [18]. Consistent with this finding, another study using a drug to boost TFEB activity also provided evidence that increased lysosome biogenesis has beneficial effects on the cardiovascular system. In a rat model of myocardial I/R injury, treatment with cilostazol (an antiplatelet drug) enhanced the translocation of TFEB to the nucleus and dramatically decreased the infarct size. On the other hand, TFEB inhibition with the compound CCI-779 abolished the protective effects of cilostazol in this I/R injury rat model [19].

In most cases, blocked or reduced lysosome function causes cardiovascular disease. However, it has also been observed that, under certain conditions, high lysosomal activity is also deleterious (Table 1). Hyperactivation of lysosome function has been implicated in some cardiovascular complications. The lysosomal cathepsins are a subgroup of cysteine proteases composed of 11 members (cathepsins B, C, F, H, K, L, O, S, V, X, and W). They are mainly localized within the lysosome, contributing to lysosomal degradation of cellular substrates. Under certain pathological conditions, both the expression level and subcellular localization of these cathepsins can change and become detrimental. Cathepsins have been associated with a variety of diseases, including heart failure [20]. Increased cathepsin gene expression was reported in conditions of cardiac stress, remodeling, and dysfunction [21]. Expression of cathepsins S and K was increased in the myocardium of hypertensive rodents and in humans with hypertension-induced heart failure [22]. Elevated cathepsin D was found in the plasma of patients after myocardial infarction [23]. In a clinical trial study, higher circulating serum cathepsin B was associated with an increased risk of a composite outcome of specific cardiovascular events or all-cause mortality in 4372 patients with stable coronary heart disease [24]. In a mouse MPS model, Gonzalez et al. found that cathepsin B plays a role in remodeling the extracellular matrix (ECM) and in the pathogenesis of MPS type I.
Secreted cathepsin B may consequently degrade the ECM in the heart and cause the cardiac phenotypes observed [25].
In addition to genetic cues, cardiotoxic chemicals, such as doxorubicin, can also induce lysosome dysfunction that in turn contributes to disease progression. Doxorubicin is a very effective and widely used chemotherapeutic drug. The anti-tumor effect of doxorubicin has been primarily attributed to its ability to intercalate into DNA, thereby blocking DNA replication and mRNA transcription [26]. However, its clinical use is limited by dose-dependent cardiotoxicity, which can lead to dilated cardiomyopathy (DCM) and congestive heart failure (CHF) [27]. Many mechanisms are associated with the cardiotoxicity of doxorubicin, including increased oxidative stress, decreased levels of antioxidants and sulfhydryl groups, inhibition of nucleic acid and protein synthesis, release of vasoactive amines, altered adrenergic function, and decreased expression of cardiac-specific genes [27]. A recent study highlighted a new mechanism by which doxorubicin impacts lysosomal function. Using both in vivo and in vitro systems [28], it was discovered that doxorubicin inhibits lysosome function by alkalinizing lysosomal pH. The activities of most lysosomal enzymes are tightly regulated by pH. The acidic pH of the lysosome is maintained mainly through the activity of the vacuolar-type H+-ATPase (V-ATPase). Further study identified that doxorubicin treatment suppressed the activity of the V-ATPase [28]. This study provided a novel mechanism of doxorubicin cardiotoxicity and revealed an interaction between cardiotoxic chemicals and lysosome function.
Phenotypes of Danon Cardiomyopathy
Danon disease is a lysosomal storage disorder and manifests in patients as cardiomyopathy, skeletal muscle weakness, and intellectual disability. Previous studies have discovered that Danon disease is associated with mutations in the LAMP2 gene, which is located on the human X chromosome and encodes a lysosomal membrane protein [29,30]. Due to the X-linked inherited nature of the LAMP2 gene, male patients with Danon disease usually develop the disease earlier in life (in childhood or adolescence) and display more severe phenotypes than female patients. Many male patients die of progressive heart failure at the average age of 19 without a heart transplant. However, female patients develop the disease later in life and present with less severe phenotypes [30].
Female Danon patients carrying both wildtype (WT) and mutant LAMP2 alleles (heterozygous) often display divergent clinical phenotypes. Studies using CMs derived from human-induced pluripotent stem cells (hiPSC-CMs) offer a unique platform to further study this phenomenon. Two diverse populations of hiPSCs can be generated from heterozygous female patients with Danon disease: one with WT LAMP-2 expression and the other with mutant LAMP-2 expression, which is due to the random inactivation of the X chromosome carrying the WT or mutant LAMP2 [31]. These two hiPSC-CM populations showed distinct phenotypes. Only the hiPSC-CM population carrying the mutant LAMP2 allele on the active X chromosome demonstrated the in vitro phenotypes of Danon disease. DNA methylation was correlated with X chromosome inactivation [32]. Ng and colleagues used 5-aza-2′-deoxycytidine, which inhibits DNA methyltransferase activity, to reactivate the silenced X chromosome bearing the WT LAMP2 allele. The treatment with 5-aza-2′-deoxycytidine partially restored WT LAMP-2 expression in female Danon hiPSC-CMs, leading to increased contractility and decreased accumulation of autophagosomes [33]. Reactivation of the X chromosome therefore holds therapeutic potential for female patients with Danon disease [33].
A previous report has suggested that CM apoptosis could be a causative factor contributing to Danon pathogenesis [34]. Hashem and colleagues reported that apoptosis, induced by the excessive amount of reactive oxygen species (ROS) produced by mitochondria, was significantly increased in hiPSC-CMs from male Danon patients compared to controls. However, whether CM apoptosis plays a major role in Danon pathogenesis is not clear, since no significant increase in hiPSC-CM apoptosis was detected in more recent studies [9,31].
Defective Autophagy Correlates to Danon Cardiomyopathy
Numerous cardiovascular diseases exhibit defects in autophagy, including Danon disease [29]. Autophagy is a process of self-cannibalization by which cells recycle misfolded proteins and damaged organelles. The resulting breakdown products serve as inputs for energy metabolism and allow cells to produce more energy to deal with starvation or stress. Three main types of autophagy have been characterized thus far: microautophagy, chaperone-mediated autophagy (CMA), and macroautophagy. Microautophagy involves the direct uptake of soluble or particulate cellular cargoes into lysosomes via invagination, protrusion, or septation of the lysosomal membrane. Microautophagy is the least studied form of autophagy [35,36]. The detailed mechanism of microautophagy and its contribution to the cardiovascular system remain elusive. CMA, on the other hand, has been well studied and has been reviewed extensively [37,38]. While the extent of CMA in the cardiovascular system is not fully understood, several proteins with known function in the heart have been identified as CMA substrates, including regulator of calcineurin 1 (RCAN1), ryanodine receptor 2 (RYR2), and myocyte enhancer factor 2D (MEF2D) [39-41].
In contrast to microautophagy and CMA, macroautophagy has been shown to play key roles in the cardiovascular system, demonstrating both cardioprotective and maladaptive roles in disease [42,43]. Macroautophagy is an intracellular process that relies on the formation of the autophagosome, a double-membrane vesicle, which carries cellular cargoes to lysosomes. Autophagosomes then fuse with lysosomes to form autolysosomes, in which these cellular cargoes are degraded by lysosome-derived acid hydrolases [44]. This review focuses on this form of autophagy, macroautophagy (hereafter referred to as autophagy). Cardiac fibroblasts, CMs, endothelial cells, and vascular smooth muscle cells are the major cellular constituents of the heart [45]. Studies focused on cell types other than CMs have shed light on how autophagy activity contributes to the progression of cardiovascular diseases. For example, in endothelial cells, autophagy is involved in the regulation of cell survival, nitric oxide production, angiogenesis, and haemostasis/thrombosis [46]. Unlike the other three major cell types in the heart, CMs have very limited regenerative capacity due to their postmitotic state [47]. The housekeeping function provided by autophagy is particularly critical for post-mitotic, terminally differentiated cells, including CMs. These cells must survive for many years and, because of their low turnover rate, cannot dilute accumulated cellular waste by cell division. Therefore, dysregulation of autophagy is detrimental to these cells and contributes to the development of many diseases. Given the scope of this review, we mainly focus on the contribution of autophagy in CMs.
The cardiovascular system has one of the highest energy demands in the body, consuming as much as 440 kcal per kg per day [48]. Unlike skeletal muscle, heart muscle functions almost exclusively aerobically, as evidenced by the density of mitochondria in CMs. Glycogen, one of the key energy sources in response to cardiac metabolic stress, is degraded via autophagy to be used as a substrate for glucose-mediated ATP production [49,50]. Therefore, autophagy is critical for the heart to maintain the health of its mitochondria and produce enough energy for contractile function.
One of the pathological hallmarks of Danon disease includes the accumulation of intracellular autophagic vacuoles containing glycogen within CMs [51]. Moreover, muscle biopsies from Danon patients and iPSC-CMs derived from Danon patients exhibited defects in autophagic flux [29,31,52]. We recently examined the gene expression profiles of six hiPSC-CM lines derived from three unrelated male Danon patients, two unrelated male healthy controls, and one isogenic LAMP-2 knockout (KO) cell line generated from one of the control lines by CRISPR-Cas9 genome editing technology [9]. Danon hiPSC-CMs recapitulated phenotypes observed in Danon patients and previous studies using Danon hiPSC-CMs, including glycogen accumulation in autophagosome-like vacuoles, suggesting impaired autophagosome-lysosome fusion. Gene ontology analysis demonstrated that 150 out of 420 differentially expressed genes with known functions were involved in metabolic processes, indicating a dramatic metabolic defect in Danon cells. To further understand this metabolic defect, we studied the relationship between the defective autophagosome-lysosome fusion we observed in Danon hiPSC-CMs and mitochondrial dysfunction. Mitochondrial dynamics, including fission and fusion, are tightly linked with mitochondrial autophagy (mitophagy) and homeostasis [53]. In Danon hiPSC-CMs, we observed the accumulation of depolarized mitochondria, which in turn decreased global mitochondrial membrane potential. Fission is known to induce mitochondrial depolarization [53]. Consistent with this notion, more mitochondrial fragmentation, which was associated with a high level of the short isoform of optic atrophy 1 (S-OPA1) [54], was detected in Danon hiPSC-CMs than in the controls. These depolarized and fragmented mitochondria become the substrates for mitophagy. We then examined energy production in Danon hiPSC-CMs. Danon cells produced significantly less cellular ATP than the controls, which is consistent with the notion that mitochondria play a central role in energy metabolism. Taken together, the accumulation of autophagosomes, fragmentation of the mitochondrial network, and reduced ATP production suggest that defects in autophagosome-lysosome fusion strongly inhibit mitophagy in Danon hiPSC-CMs. Interestingly, Danon hiPSC-CMs also displayed decreased contractile twitch force compared to controls, measured using a micropost array platform [55]. In conclusion, our study, along with others, shed new light on the pathogenesis of Danon disease by bridging the defect in autophagy with the development of cardiomyopathy.
Deficiency of the B Isoform of LAMP-2 is the Main Causative Factor of Danon Cardiomyopathy
The LAMP2 gene produces three isoforms, LAMP-2A, LAMP-2B, and LAMP-2C, through alternative splicing. Proteins of these isoforms share an identical N-terminus (the lysosome luminal domain) while having distinct C-termini, including the transmembrane domains and cytosolic tails. LAMP-2A and LAMP-2B are broadly expressed in many human tissues, whereas the expression level of LAMP-2C is low [56]. We examined the expression of LAMP-2 isoforms in several human cell types. LAMP-2B is the predominant isoform, accounting for over 70% of total LAMP-2 protein in human and mouse CMs. To study the impacts of the different LAMP-2 isoforms on Danon pathogenesis, we generated isoform-specific KO lines from an isogenic healthy control line. LAMP-2B KO hiPSC-CMs, but not LAMP-2A or LAMP-2C KO hiPSC-CMs, recapitulated the phenotypes we observed in Danon hiPSC-CMs, including defects in mitochondrial function, energy production, and autophagy. Thus, our study provided strong evidence that the B isoform of LAMP-2 plays a causative role in the development of Danon pathogenesis.
A Novel Mechanism of LAMP-2B in Autophagosome-Lysosome Fusion in CMs
Extensive studies have established a key role of LAMP-2A in CMA [57,58]. LAMP-2C has been implicated in novel types of autophagy responsible for RNA and DNA uptake and degradation [59,60]. Recently, LAMP-2C has also been shown to function as a negative regulator of CMA in the immune system [56]. Unlike the other two isoforms, LAMP-2B's role in autophagy remains elusive. Using an autophagic flux assay, we found that only LAMP-2B KO hiPSC-CMs displayed an autophagy defect, which was not observed in LAMP-2A or LAMP-2C KO hiPSC-CMs. Microtubule-associated protein light chain 3 (LC3)-II has been previously shown to mark mature autophagosomes [61]. In our system, LAMP-2B deficiency caused the accumulation of LC3-II-positive autophagosomes, which is due to a dramatically impaired fusion between autophagosomes and lysosomes. Studies using non-CM cells, such as HEK293 cells and mouse embryonic fibroblasts (MEFs), reveal that Syntaxin 17 (STX17) is required for autophagosome-lysosome fusion through its interactions with synaptosome-associated protein 29 (SNAP29) and vesicle-associated membrane protein 8 (VAMP8). Autophagy related 14 (ATG14) enhances autophagosome-lysosome fusion by interacting with the STX17-SNAP29 complex [62,63]. However, knockout of STX17 in hiPSC-CMs did not cause significant accumulation of autophagosomes, suggesting an STX17-independent autophagosome-lysosome fusion mechanism in CMs.
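For readers unfamiliar with autophagic flux assays, the underlying logic is a difference measurement: the autophagosome marker LC3-II is quantified with and without blocking lysosomal degradation, and the increment approximates flux. The sketch below is a generic illustration with invented values; the review does not specify the exact assay design, and the use of a lysosomal inhibitor here is an assumption based on common practice.

```python
def autophagic_flux(lc3ii_with_inhibitor, lc3ii_without_inhibitor):
    """LC3-II turnover: the increase in LC3-II when lysosomal degradation
    is blocked approximates how much cargo was being degraded."""
    return lc3ii_with_inhibitor - lc3ii_without_inhibitor

# Hypothetical densitometry values (normalized to a loading control):
control_flux = autophagic_flux(3.0, 1.0)  # 2.0 -> cargo is being degraded
danon_flux = autophagic_flux(3.1, 3.0)    # 0.1 -> autophagosomes accumulate
# In fusion-defective cells, LC3-II is already high without the inhibitor
# and barely increases with it, consistent with blocked
# autophagosome-lysosome fusion rather than increased autophagy induction.
```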
To uncover the molecular role of LAMP-2B in autophagosome-lysosome fusion, we took advantage of two cell lines with low (HEK293) or high (hiPSC-CM) levels of LAMP-2B expression. Ectopic overexpression of LAMP-2B in HEK293 cells significantly enhanced the interaction between ATG14 (on autophagosomes) and VAMP8 (on lysosomes). To our surprise, overexpression of LAMP-2B also suppressed the autophagosome-lysosome fusion defect induced by STX17 knockdown, which indicates that LAMP-2B functions independently of STX17. We further verified these findings in CMs that express high levels of LAMP-2B. ATG14 and VAMP8 formed a complex with endogenous LAMP-2B. This complex was disrupted in Danon or LAMP-2B KO hiPSC-CMs. In conclusion, our findings established a molecular mechanism by which the loss of the lysosomal membrane protein LAMP-2B disrupts autophagosome-lysosome fusion in Danon disease. Therefore, targeting LAMP-2B represents a potential therapeutic avenue to restore defective autophagy in Danon cardiomyopathy. Moreover, LAMP-2B-deficient hiPSC-CMs could also serve as an in vitro model of Danon disease for drug screening and discovery to identify novel therapeutic agents that restore autophagosome-lysosome fusion.
Gene Therapy to Treat LSDs
The development of lysosomal storage disorders often results from mutations in a single gene. As mentioned, Pompe disease, Danon disease, Fabry disease, and MPS type I are caused by mutations in the GAA [8], LAMP2 [9,10], α-galactosidase A (α-Gal A) [11], and α-L-iduronidase (IDUA) genes [12], respectively. Gene therapy is a promising therapeutic intervention strategy that utilizes viral infection to replace mutated, pathogenic genes with wildtype copies to ameliorate or cure disease. While the concept of gene therapy originated in the 1960s and the first, controversial human study was performed in 1980 by Martin Cline, the safety of gene therapy has been extensively debated since its conception [64,65].
The first clinical trials for gene therapy took place in 1991 using retroviral delivery to infect patient-derived peripheral blood cells. However, despite X-linked severe combined immunodeficiency being successfully treated by retroviral gene therapy in pediatric patients in 2000, it was later discovered that retroviral insertion into these patients' genomes directly led to some of these patients developing leukemia [65]. Additionally, delivery of the ornithine carbamoyltransferase gene by adenovirus triggered a severe, ultimately fatal immune reaction in one patient in 1999 [66].
Adeno-associated virus (AAV) has also been used extensively to deliver gene cargo to patients; it elicits a low immune response and shows minimal integration into the host genome, suggesting better safety and tolerability in patients compared to retroviral and adenoviral delivery systems [67]. There are 13 serotypes of AAV, which each exhibit preferential binding to cell surface receptors and glycans, thereby altering tissue tropism and allowing for somewhat selective infection of certain organs [68]. Currently, AAV gene therapy safety and efficacy is being tested in over 150 clinical trials for various diseases, including Pompe disease (NCT02240407), Fabry disease (NCT04046224, NCT04040049), MPS types I-III (NCT02702115, NCT03566043, NCT04088734, NCT03612869), and Danon disease (NCT03882437).
While the Food and Drug Administration (FDA) has already approved several AAV therapies [69], safety concerns of gene therapy still exist, especially for the treatment of chronic diseases like lysosomal storage disorders. For example, Danon cardiomyopathy represents a very severe form of cardiomyopathy that has been shown to result from impaired autophagy. Currently, there is no cure for this disease. A new clinical trial (NCT03882437) was recently started to assess the safety and toxicity of LAMP-2B gene therapy delivered by adeno-associated virus serotype 9 (AAV9) in male patients with Danon disease. These patients may need continuous injections over their lifetimes due to the limited duration of AAV-driven overexpression. Furthermore, the degree of transgene overexpression must be finely controlled to avoid toxicity. Moreover, AAV infection has been shown to cause hepatic toxicity and hepatocellular carcinoma in mice [70,71]. While tumor incidence has not been associated with AAV treatment in humans thus far (with >7 year follow up from certain trials) [72], the long-term safety of AAV therapies is still largely unknown. Despite these concerns, gene therapy remains a promising strategy to treat genetic diseases and will undoubtedly continue to improve in the coming decades.
Perspectives
LSDs are an important category of diseases caused by lysosome dysfunction. Current therapeutic strategies targeting LSDs have mainly focused on restoring defective lysosomal activity. These strategies include haematopoietic stem cell transplantation, enzyme replacement therapy, substrate reduction, and chaperone therapies [73]. However, these strategies have limitations, including the difficulty of targeting the therapies to the required sites in the body. Furthermore, treatment is usually only initiated after organ damage has already occurred. Therefore, the effectiveness of these therapies still requires further evaluation.
Autophagy, one of the key cellular clearance mechanisms tightly linked to normal lysosome function, plays a critical role in the cardiovascular system. Cardiovascular disorders have been shown to be both positively and negatively influenced by autophagy. Therefore, autophagy represents a promising therapeutic avenue to treat cardiovascular diseases. However, further investigation is required to delineate the specific contexts in which autophagy is salutary or deleterious. Establishing detailed molecular mechanisms into how autophagy affects cardiovascular disease progression is crucial in developing novel therapeutics.
Mechanistic insights into Danon pathogenesis gained from isoform-specific KO cell lines indicate that the hiPSC-based disease-modeling system holds immense value for future translational studies. Using CRISPR/Cas9 genome-editing technology, we generated an isogenic, mutation-corrected hiPSC line from one of the hiPSC lines derived from Danon patients. CMs derived from the mutation-corrected hiPSC line exhibited restored mitochondrial function and autophagic flux, as well as significantly improved contractile function, compared with the parental Danon line. These data provide a valuable foundation for gene-correction-based therapy in patients with Danon disease. To date, 68 different LAMP-2 mutations have been observed in Danon patients [30]. Of these, 64 are point mutations or small insertions/deletions that could potentially serve as targets for CRISPR/Cas9-mediated gene correction.
In addition to CMs, LAMP-2B expression is also highly enriched in the brain, skeletal muscle, and retinal pigment epithelium [74,75]. Consistent with this expression pattern, Danon patients also exhibit mental retardation, skeletal muscle weakness, and vision impairment. Given the high energy consumption of these three organs, a LAMP-2B-mediated mechanism of autophagosome/lysosome fusion similar to that in the heart may exist in these tissues. Therefore, studies using cells generated from LAMP-2B KO hiPSCs could provide novel insights into the pathogenesis of Danon-derived mental disorder, muscle weakness, and visual defects.
Conclusions
In this review, we summarized our current view of the relationship between lysosome function and cardiovascular diseases. Evidence provided by recent studies has greatly increased our understanding of how dysregulation of lysosomal pathways in the cardiovascular system contributes to disease in a highly context-dependent manner. Using Danon disease, a disease caused by impaired autophagosome/lysosome fusion that displays very severe cardiomyopathy phenotypes, as an example, we further discussed how new insights into the molecular mechanisms of disease pathogenesis could impact the development of novel therapeutic strategies. Fine-tuning the activity of lysosomal pathways will be a great challenge due to the dual role of lysosomes in cardiovascular diseases. However, there is no doubt that lysosomal function significantly contributes to disease pathogenesis and therefore represents an important target of future therapeutics.
Author Contributions: C.C., A.S.R., and K.S. wrote the manuscript. All authors have read and agree to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology",
"Chemistry"
] |
Sensitivity analysis of chaotic systems using a frequency-domain shadowing approach
We present a frequency-domain method for computing the sensitivities of time-averaged quantities of chaotic systems with respect to input parameters. Such sensitivities cannot be computed by conventional adjoint analysis tools, because the presence of positive Lyapunov exponents leads to exponential growth of the adjoint variables. The proposed method is based on the least-squares shadowing (LSS) approach [1], which formulates the evaluation of sensitivities as an optimisation problem, thereby avoiding the exponential growth of the solution. However, all existing formulations of LSS (and its variants) are in the time domain, and the computational cost scales with the number of positive Lyapunov exponents. In the present paper, we reformulate the LSS method in Fourier space using harmonic balancing. The new method is tested on the Kuramoto-Sivashinsky system and the results match those obtained using the standard time-domain formulation. Although the cost of the direct solution is independent of the number of positive Lyapunov exponents, storage and computing requirements grow rapidly with the size of the system. To mitigate these requirements, we propose a resolvent-based iterative approach that needs much less storage. Application to the Kuramoto-Sivashinsky system gave accurate results with very low computational cost. The method is applicable to large systems and paves the way for application of the resolvent-based shadowing approach to turbulent flows. Further work is needed to assess its performance and scalability.
Introduction
Optimisation of engineering devices is based on the definition of an objective function, usually a time-averaged quantity J(s), and the evaluation of the problem parameters, s, that minimise or maximise this function, depending on the application. During the optimisation process, the gradient of the objective function with respect to the parameters, dJ(s)/ds (also known as the sensitivity), is usually required. This is obtained by solving the tangent equation (or the adjoint equations for multiple parameters). In either case, the equations are obtained by linearising the governing non-linear set describing the system around the solution obtained for the reference values of the parameters, s. The tangent (or adjoint) equations are then integrated forward (or backward) in time, respectively, to obtain the desired sensitivities.
The aforementioned approach works very well when the governing set of equations describing the system is steady, in which case the solution is a point in phase space. When the evolution is unsteady, however, and in particular when the system exhibits chaotic behaviour, this process fails. The reason is that chaotic systems have one or more positive Lyapunov exponents (PLEs): two solution trajectories starting from the same initial conditions and evaluated at $s$ and $s + \delta s$ deviate from each other, leading to exponentially growing sensitivities, as explained in [2]. For this reason, for example, model predictive control algorithms for transitional or turbulent flows employ the receding-horizon approach, whereby the optimisation is performed over a receding window of finite time [3,4], so that sensitivities remain bounded and reliable.
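To make the divergence concrete, the short sketch below (ours, not from the paper) integrates the tangent equation $dv/dt = (\partial f/\partial u)\,v + \partial f/\partial s$ alongside a chaotic reference trajectory; the Lorenz-63 system and all numerical settings are illustrative stand-ins for a generic chaotic system.

```python
# Illustration (not from the paper): the tangent solution of a chaotic system
# grows like e^{lambda_1 t}, which is why conventional sensitivities diverge.
import numpy as np

def f(u, rho):
    x, y, z = u
    return np.array([10.0 * (y - x), x * (rho - z) - y, x * y - 8.0 / 3.0 * z])

def dfdu(u, rho):
    x, y, z = u
    return np.array([[-10.0, 10.0, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -8.0 / 3.0]])

def dfds(u, rho):
    x, _, _ = u
    return np.array([0.0, x, 0.0])      # derivative of f with respect to rho

dt, rho = 0.001, 28.0
u = np.array([1.0, 1.0, 1.0])
v = np.zeros(3)                          # tangent solution v = du/drho
for n in range(50001):                   # forward Euler, adequate for illustration
    if n % 10000 == 0:
        print(f"t = {n*dt:6.1f}   ||v|| = {np.linalg.norm(v):.3e}")
    v = v + dt * (dfdu(u, rho) @ v + dfds(u, rho))
    u = u + dt * f(u, rho)
```

The printed norm grows by several orders of magnitude over a few tens of time units, mirroring the $\sim e^{\lambda_1 t}$ growth discussed in the text.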
Several approaches that can compute useful sensitivities in chaotic systems have been proposed. These are based on ensemble schemes [5], the fluctuation-dissipation theorem [6], the Fokker-Planck equation [7], cumulant expansions [8], or unstable periodic orbits [9]. One of the most promising approaches is Least Squares Shadowing (LSS) [1,10,11], which is based on the shadowing lemma [12,13]. For uniformly hyperbolic systems, this lemma guarantees the existence of a solution trajectory evaluated at $s + \delta s$ that shadows, i.e. remains close to, the reference trajectory evaluated at $s$. This regularises the problem, avoids the exponential growth, and results in meaningful sensitivities.
This lemma is also central in establishing trust in the statistics of numerical solutions of chaotic systems. Due to round-off errors, a computed trajectory will deviate from the true trajectory of the system (starting from the same initial condition). The shadowing lemma guarantees the existence of a true trajectory (with a different initial condition) that will shadow the numerical one [14,15,16], so the statistics of the computed solution can be trusted. Recent work, however [17], has shown that the shadowing trajectories may not be physical, thus casting doubt on this central premise. Several chaotic systems (one-dimensional perturbed tent maps) were examined; the shadowing trajectories were found to be physical in one system and non-physical in others. This finding raises important questions about the interpretation of shadowing-based sensitivities. The best-known variants of LSS are the Multiple Shooting Shadowing (MSS) method [18,19,20] and the non-intrusive Least Squares Shadowing (NILSS) method [21,22]. Both methods can be applied to large systems, but their computational cost scales with the number of PLEs. This limitation restricts their application to systems with a relatively small or moderate number of PLEs. For example, NILSS has been applied successfully to a 2D flow (backward-facing step with 14 PLEs [21]) and to 3D flows (minimal channel flow unit at $Re_\tau = 140$ with 150-160 PLEs [23], and flow around a cylinder at $Re = 525$ with fewer than 30 PLEs [24]). The number of PLEs, and how it changes with the system parameters, is therefore of critical importance for the application of the method. In the area of turbulent flows, the Lyapunov spectrum has been computed for low-Reynolds-number channel flow [25] and weakly chaotic Taylor-Couette flow [26]. A more recent study [27] investigated the variation of the spectrum with Reynolds number for forced homogeneous isotropic turbulence (HIT). The Reynolds numbers examined (based on the Taylor microscale) were $Re_\lambda = 15.5, 21.3, 25.6$ (note that these values are considered very small for engineering applications). The numbers of PLEs were found to be about 25, 60 and 100, respectively (see figure 2 of [27]). A fourth value, $Re_\lambda = 37.7$, was also considered, but it was not possible to determine the number of PLEs because of the slow decay rate of the spectrum.
The latter was found to follow a power law, $\lambda_i \approx \alpha (i-1)^\beta + \lambda_1$, with the exponent $\beta$ in the range 0.81-0.85 (the smaller value corresponding to the higher $Re_\lambda$). The maximum LE, $\lambda_1$, is expected to scale with the inverse of the Kolmogorov time scale, $\tau_\eta$; see the theoretical arguments in [28]. This was tested in [29] for HIT, and it was found that $\lambda_1 \tau_\eta$ is not constant, but instead grows with $Re_\lambda$ following a power law, $\lambda_1 \tau_\eta \sim Re_\lambda^\gamma$. The above scaling of $\lambda_1$, together with the fact that the decay rate of the Lyapunov spectrum decreases with Reynolds number, means that the number of PLEs grows very rapidly as the Reynolds number increases. Thus, alternative approaches are required to make LSS and its variants applicable to complex flows of engineering interest. One such approach relies on understanding the underlying physical processes. For example, it is well known that momentum transfer is dominated by large-scale structures, so the smaller scales (which are responsible for the largest LEs) can be filtered out and their effect modelled, hopefully without loss of accuracy in the sensitivity. This is exactly what large eddy simulations (LES) are designed for [30]. In standard LES, the equations are filtered in space, but temporal filtering (with filter time scale $\Delta$) is also possible [31].
In the limit of $\Delta \to \infty$, the temporally-averaged LES (TLES) equations tend to the standard Reynolds-Averaged Navier-Stokes (RANS) equations (section 2.3 of [31]). We conjecture, therefore, that as $\Delta$ increases, application of LSS to the TLES equations will recover the sensitivities predicted by the tangent (or adjoint) method applied to RANS. Thus, the parameter $\Delta$ bridges two limits: LSS applied to the unfiltered Navier-Stokes equations ($\Delta = 0$) and the RANS equations ($\Delta \to \infty$).
As $\Delta$ increases, the number of PLEs decreases and the problem becomes better conditioned. However, accuracy is traded for computational efficiency, because as $\Delta$ increases the effect of more scales needs to be modelled.
Although the above approach mitigates the rapid growth of the computational cost of LSS (and also provides a useful conceptual framework that bridges two limits), it relies on accurate modelling of the filtered scales. In the present paper, we follow a different approach. All existing formulations of LSS and its variants have been derived in the time domain. If, however, LSS is formulated in the frequency domain, the exponential separation of the trajectories for $s$ and $s + \delta s$ does not appear explicitly. Of course, time- and frequency-domain formulations are equivalent, but, as will be seen, the latter formulation allows us to gain deep physical insight and is also amenable to iterative solution algorithms that are not possible with the former. Frequency-domain approaches have been applied from the perspective of linear [32] and non-linear input-output analysis [33], the frequency response of periodically time-varying base flows [34], and the model-based design of transverse wall oscillations for turbulent drag reduction in a channel flow [35]. Similarities and differences with existing frequency-domain approaches are discussed throughout the manuscript.
The paper is organised as follows. Section 2 sets the scene and presents the standard LSS algorithm in the time domain. The formulation of the algorithm in the frequency domain is derived in section 3, followed by application to the Kuramoto-Sivashinsky equation in section 4. A resolvent-based iterative algorithm to solve the resulting system is presented in section 5 and the results are further analysed in section 6. We conclude in section 7.
2. Sensitivity analysis of chaotic systems using the shadowing approach

Consider a dynamical system governed by a set of ordinary differential equations
$$\frac{du}{dt} = f(u; s), \tag{1}$$
where $u(t; s) \in \mathbb{R}^{N_u}$ is the vector of state variables and $s \in \mathbb{R}^{N_s}$ is the set of control parameters that define the dynamics of the system. System (1) can arise, for example, after spatial discretisation of a set of conservation laws that describe mathematically the problem under investigation. We assume that the vector field $f(u; s): \mathbb{R}^{N_u} \times \mathbb{R}^{N_s} \to \mathbb{R}^{N_u}$ varies smoothly with $u$ and $s$.
In many applications we are interested in evaluating the sensitivity of a time-averaged quantity $J(s): \mathbb{R}^{N_s} \to \mathbb{R}$ to the parameters $s$. For example, in the area of aerodynamics, $J(s)$ can be the drag coefficient and $s$ the set of variables that describe the shape of an airfoil.
The objective and its gradient with respect to $s$ are defined as
$$J(s) = \lim_{T\to\infty} \frac{1}{T}\int_0^T J\big(u(t;s); s\big)\, dt, \tag{2}$$
$$\frac{dJ}{ds} = \frac{d}{ds}\left[\lim_{T\to\infty} \frac{1}{T}\int_0^T J\big(u(t;s); s\big)\, dt\right]. \tag{3}$$
In chaotic systems, the limit and differentiation operations do not commute, i.e.
$$\frac{dJ}{ds} \neq \lim_{T\to\infty} \frac{1}{T}\int_0^T \left(\frac{\partial J}{\partial u}\, v + \frac{\partial J}{\partial s}\right) dt, \tag{4}$$
where
$$v(t) = \frac{du}{ds} = \lim_{\delta s \to 0} \frac{u(t; s+\delta s) - u(t; s)}{\delta s} \tag{5}$$
is the sensitivity of the solution $u(t; s)$ to a change $\delta s$ of $s$. The reason is that chaotic systems have one or more PLEs. This means that the distance in phase space between $u(t; s+\delta s)$ and $u(t; s)$, i.e. the Euclidean norm of the vector $u(t; s+\delta s) - u(t; s)$ that appears in the numerator of (5), grows exponentially at a rate $\sim e^{\lambda_1 t}$, where $\lambda_1$ is the maximum of these exponents [2,7,36]. Thus, the quantity $\frac{1}{T}\int_0^T \left(\frac{\partial J}{\partial u}v + \frac{\partial J}{\partial s}\right)dt$ that appears on the right-hand side of (4) diverges as $T \to \infty$. On the other hand, assuming that $J(s)$ varies smoothly with $s$, the sensitivity $\frac{dJ}{ds}$ is finite. If the dynamical system (1) is uniformly hyperbolic, the shadowing lemma [37] guarantees the existence of a solution trajectory evaluated at $s + \delta s$ that always remains close to, i.e. shadows indefinitely, the reference trajectory $u(t; s)$.
We denote this shadowing trajectory as $u(\tau(t); s+\delta s)$, where $\tau(t)$ is an appropriate time transformation. The LSS method, proposed in [1], computes the shadowing trajectory by minimising the distance between $u(\tau(t); s+\delta s)$ and $u(t; s)$ in a least-squares sense, i.e.
$$\min_{u,\tau}\ \frac{1}{T}\int_0^T \left\| u(\tau(t); s+\delta s) - u(t; s)\right\|^2 + \alpha^2\left(\frac{d\tau}{dt} - 1\right)^2 dt, \tag{6a}$$
subject to $du/d\tau = f(u; s+\delta s)$, where $\alpha^2$ is a constant parameter. Taking the limit $\delta s \to 0$ leads to the following linear minimisation problem,
$$\min_{v,\eta}\ \frac{1}{T}\int_0^T \|v\|^2 + \alpha^2 \eta^2\, dt, \tag{7a}$$
subject to
$$\frac{dv}{dt} = \frac{\partial f}{\partial u}\, v + \frac{\partial f}{\partial s} + \eta f, \tag{7b}$$
where
$$v(t) = \frac{d}{ds} u(\tau(t); s) \tag{8a}$$
and $\eta(t)$ is the time dilation associated with the transformation $\tau(t)$. The second term within the cost functions (6a) and (7a) penalises the deviation of $\tau(t)$ from $t$. A high value of $\alpha^2$ results in a small deviation (heavy penalisation), while a small value in light penalisation. The solution of (7) for $\alpha^2 = 0$ leads to the orthogonality condition between the vectors $f(u; s)$ and $v(u; s)$ at each point along the trajectory, i.e. $f^\top v = 0$, a constraint from which $\eta(t)$ can be obtained [18]. Thus, problem (7) becomes
$$\min_{v}\ \frac{1}{T}\int_0^T \|v\|^2\, dt, \tag{9a}$$
subject to
$$\frac{dv}{dt} = \frac{\partial f}{\partial u}\, v + \frac{\partial f}{\partial s} + \eta f, \tag{9b}$$
$$f^\top v = 0. \tag{9c}$$
From the solution $v^{lss}(t)$ and $\eta^{lss}(t)$ of (9), the sensitivity $\frac{dJ}{ds}$ can be easily computed [1].
3. Formulation of the shadowing algorithm in Fourier space
As mentioned in the Introduction, all existing methods solve the minimisation problem (9) in the time domain. In this section, we formulate the problem in the frequency domain, i.e. in Fourier space, and seek a solution that remains bounded.
To this end, we consider a reference trajectory u(t; s) of length T and assume that the solution of the minimisation problem (9) is periodic with period T .
Thus, it can be written in terms of Fourier series as
$$v(t) = \sum_{k=-\infty}^{\infty} \hat{v}_k\, e^{ik\omega_0 t}, \qquad \eta(t) = \sum_{k=-\infty}^{\infty} \hat{\eta}_k\, e^{ik\omega_0 t}, \tag{10}$$
where $\hat{v}_k, \hat{\eta}_k$ denote the Fourier coefficients, $\omega_0 = \frac{2\pi}{T}$ is the fundamental angular frequency, and the index $k$ characterises the harmonics with frequencies $\omega_k = k\omega_0$. We assume similar series expansions for the Jacobian $\frac{\partial f}{\partial u}(t)$ and $f(t)$,
$$\frac{\partial f}{\partial u}(t) = \sum_{k=-\infty}^{\infty} \widehat{\left(\frac{\partial f}{\partial u}\right)}_k e^{ik\omega_0 t}, \qquad f(t) = \sum_{k=-\infty}^{\infty} \hat{f}_k\, e^{ik\omega_0 t}. \tag{11}$$
The matrix-vector product $\frac{\partial f}{\partial u}(t)\, v(t)$ can be written as
$$\frac{\partial f}{\partial u}(t)\, v(t) = \sum_{k=-\infty}^{\infty} \hat{c}_k\, e^{ik\omega_0 t}, \qquad \hat{c}_k = \sum_{l=-\infty}^{\infty} \widehat{\left(\frac{\partial f}{\partial u}\right)}_{k-l} \hat{v}_l,$$
which is the convolution sum between the Fourier coefficients of $\frac{\partial f}{\partial u}(t)$ and $v(t)$. Similarly, the left-hand side of the orthogonality condition (9c) can be expanded as
$$f^\top(t)\, v(t) = \sum_{k=-\infty}^{\infty} \hat{d}_k\, e^{ik\omega_0 t}, \qquad \hat{d}_k = \sum_{l=-\infty}^{\infty} \hat{f}_{k-l}^{\,\top}\, \hat{v}_l,$$
where the notation $(\,)^\top$ denotes the transpose operation. In the above two expressions, we have assumed that the weighting matrix associated with the inner product is the identity matrix, but the analysis below can be easily generalised to an inner product defined as $f^\top Q\, v$. In practice, the range of the index $k$ is truncated to lie within the interval $[-q, q]$. For example, if the reference trajectory is sampled every $\Delta t$, $q = \frac{T}{2\Delta t} - 1$. The frequency spectrum of $u(t; s)$ can also indicate the number of spectral coefficients that must be retained. A finite $q$ amounts to applying a sharp spectral cut-off filter to the above expansions, where all coefficients with $|k|$, $|l|$ or $|k-l| > q$ are set equal to 0.
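The truncated convolution sum is straightforward to implement once the harmonics of the Jacobian are available. The sketch below is ours (the dictionary-based storage and names such as `A_hat` are our choices, not the authors'):

```python
# Sketch (assumed data layout): Fourier coefficients of the product (df/du)(t) v(t)
# as the truncated convolution over l of A_{k-l} v_l, with |k|, |l|, |k-l| <= q.
import numpy as np

def convolve_blocks(A_hat, v_hat, q):
    """A_hat: dict k -> (Nu, Nu) Jacobian harmonic; v_hat: dict k -> (Nu,) vector."""
    Nu = next(iter(v_hat.values())).shape[0]
    c_hat = {}
    for k in range(-q, q + 1):
        c = np.zeros(Nu, dtype=complex)
        for l in range(-q, q + 1):
            if abs(k - l) <= q:          # sharp spectral cut-off filter
                c += A_hat[k - l] @ v_hat[l]
        c_hat[k] = c
    return c_hat
```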
Introducing the finite spectral representations into (9b) and (9c) yields the following block set of equations for the $k$-th pair of coefficients $\hat{v}_k, \hat{\eta}_k$,
$$ik\omega_0 I_{N_u}\, \hat{v}_k - \sum_{l=-q}^{q} \widehat{\left(\frac{\partial f}{\partial u}\right)}_{k-l} \hat{v}_l - \sum_{l=-q}^{q} \hat{f}_{k-l}\, \hat{\eta}_l = \widehat{\left(\frac{\partial f}{\partial s}\right)}_k, \qquad \sum_{l=-q}^{q} \hat{f}_{k-l}^{\,\top}\, \hat{v}_l = 0, \tag{17}$$
where $k = -q, \dots, q$ and $I_{N_u}$ is the identity matrix of dimension $N_u$. Each block consists of $N_u + 1$ equations and there are in total $2q + 1$ blocks, resulting in $(N_u + 1) \times (2q + 1)$ equations and unknowns.
Due to the sharp spectral cut-off filter mentioned earlier, the starting and final values of the index $l$ in the summations are slightly modified when $k \neq 0$. For example, for $k = -q$ the summations in equation (17) run over $l = -q, \dots, 0$, while for $k = +q$ they run over $l = 0, \dots, q$. Stacking the blocks one below the other results in a linear system of equations with block-Toeplitz structure, i.e. with the same blocks along each diagonal,
$$T\, V = R, \tag{22}$$
where $V$ collects the coefficient pairs $(\hat{v}_k, \hat{\eta}_k)$ and $R$ the forcing coefficients $\widehat{\left(\partial f/\partial s\right)}_k$.
System (22) can also be written in expanded matrix form, with the blocks of (17) arranged along the diagonals. The resulting matrix, known as the Hill matrix [38,39], contains square blocks of dimensions $(N_u+1) \times (N_u+1)$, and thus has very large storage requirements. The solution of system (22) can be written symbolically as
$$V = H\, R, \tag{24}$$
where
$$H \equiv T^{-1} \tag{26}$$
is the matrix that maps the input $R$ to the output $V$. As $q \to \infty$, $H$ becomes an operator, termed here the shadowing harmonic operator. Note that equations (9b) and (9c) form a linear, time-varying periodic system; thus, the shadowing harmonic operator is identical to the standard harmonic operator, defined in [39], applied to this system. The properties of this operator will be examined in the next section for the Kuramoto-Sivashinsky equation.
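As an illustration of the block structure just described, the following sketch (ours; the block layout and variable names are assumptions, not the authors' code) assembles a truncated Hill matrix and solves (24) directly:

```python
# Minimal sketch of the block-Toeplitz (Hill) matrix: block row k couples the
# unknowns (v_l, eta_l) through the harmonics of df/du and f.
import numpy as np

def hill_matrix(A_hat, f_hat, q, Nu, w0):
    nb = Nu + 1                                    # block size (Nu + 1)
    T = np.zeros(((2*q + 1) * nb, (2*q + 1) * nb), dtype=complex)
    for k in range(-q, q + 1):
        for l in range(-q, q + 1):
            if abs(k - l) > q:
                continue                           # sharp spectral cut-off
            B = np.zeros((nb, nb), dtype=complex)
            B[:Nu, :Nu] = -A_hat[k - l]            # -(df/du)_{k-l}
            B[:Nu, Nu] = -f_hat[k - l]             # time-dilation column
            B[Nu, :Nu] = f_hat[k - l]              # orthogonality row f^T
            if k == l:
                B[:Nu, :Nu] += 1j * k * w0 * np.eye(Nu)
            r0, c0 = (k + q) * nb, (l + q) * nb
            T[r0:r0 + nb, c0:c0 + nb] = B
    return T

# Direct solve of (24):  V = np.linalg.solve(hill_matrix(...), R)
```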
The sensitivity $\frac{dJ}{ds}$ is computed from [1],
$$\frac{dJ}{ds} = \frac{1}{T}\int_0^T \left(\frac{\partial J}{\partial u}\, v + \frac{\partial J}{\partial s} + \eta\left(J - \bar{J}\right)\right) dt,$$
which in the frequency domain can be written as a sum over the Fourier coefficients $\hat{v}_k$ and $\hat{\eta}_k$. In the above formulation, the reference trajectory $u(t; s)$ was expanded using the same number of Fourier modes as the solution, $2q + 1$, and each diagonal of the block-Toeplitz matrix $T$ ($T_m$) corresponds to one harmonic of $u(t; s)$. This is, however, not necessary. For example, if the spectral content of $u(t; s)$ is concentrated in a few frequencies, then only the relevant diagonals of $T$ ($T_m$) need to be retained. This can lead to enormous savings in the storage requirements and solution time of system (24). In the limiting case where only the time-average of $u(t; s)$ is retained, the equations decouple and the $k$-th component satisfies a stand-alone resolvent equation. In this case, the harmonic balancing method becomes identical to standard resolvent analysis [32].
Some comments are warranted here to clarify an underlying assumption of the above formulation. Suppose that the reference trajectory $u(t; s)$, and thus $\frac{\partial f}{\partial u}(t)$ and $f(t)$, are exactly periodic with period $T$, and the minimisation problem (9) is solved in the time domain using the multiple shooting shadowing method [18,19]. If the trajectory is sampled at points $t_0, \dots, t_K$, the solution of (9) will not necessarily be periodic, i.e. $v(t_0)$ will not necessarily be equal to $v(t_K)$.
In the above formulation, we have not explicitly considered the minimisation of the cost function (9a); instead we closed the system by assuming periodicity, i.e. $v(t_0) = v(t_K)$ and $\eta(t_0) = \eta(t_K)$. The same assumption is made in the periodic shadowing method of Lasagna [40], and leads to a sensitivity error that initially decays at a rate $1/T$, followed by the asymptotic rate $1/\sqrt{T}$ (the latter dictated by the central limit theorem). There is, however, an important difference compared to the present method: in [40], the time transformation $\tau(t)$ is linear with respect to $t$, leading to a constant $\eta(t)$. In the present method, we do not prescribe any form of $\tau(t)$, so $\eta(t)$ is unknown and is obtained by imposing the orthogonality constraint (9c) at every point along the trajectory.
Another difference is that our method is formulated in the frequency domain instead of the time domain. As will be seen later, this can lead to significant simplifications and allows one to obtain deep physical insight into the dominant factors that determine the sensitivity, which is not possible in the time domain.
An approach closely related to the present one was proposed recently; it is based on an equation similar to (9b). The authors restrict the harmonic operator to a subspace that is orthogonal to the direction of the phase shift given by $\hat{f}_k$. This is achieved by projecting out of $\hat{g}'_k$ (the Fourier coefficients of the forcing $g'(t)$) the component that would lead to a non-zero projection of the solution $\hat{q}'_k$ onto $\hat{f}_k$. In the present paper, we seek the sensitivity with respect to $s$, thus the forcing vector $g'(t)$ takes the particular form $g'(t) = \frac{\partial f}{\partial s}(t)$. This vector is subsequently modified by adding $\eta(t) f(t)$, where the time dilation $\eta(t)$ is computed so that the orthogonality constraint is satisfied at all time instants, as already mentioned.
In the above frequency-domain formulation, we seek a harmonic solution that remains bounded, see the expansions (10), and does not suffer from exponentially growing terms. There is a price to pay, however: the frequencies are all coupled together, leading to large storage requirements for the Hill matrix, see (24).
The properties of the shadowing harmonic operator will be examined in the next section for the Kuramoto-Sivashinsky equation. This test case is small enough that the Hill matrix can be stored and the linear system (24) solved directly with LU decomposition. In section 5 we propose an iterative method that mitigates the storage and solution time requirements.
4. Application to the Kuramoto-Sivashinsky equation
We apply the method proposed in the previous section to the Kuramoto-Sivashinsky (KS) equation, which displays complex spatio-temporal chaos and is frequently used in the literature as a test case for chaotic systems [41]. The equation takes the form
$$\frac{\partial u}{\partial t} = -(u + c)\frac{\partial u}{\partial x} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^4 u}{\partial x^4}, \qquad x \in [0, L], \tag{31}$$
where $c$ is an artificially introduced parameter [42]. The term $\frac{\partial^2 u}{\partial x^2}$ is responsible for energy production, while $\frac{\partial^4 u}{\partial x^4}$ adds dissipation to the system. We set $L = 128$ to generate chaotic solutions [41] and discretise (31) with a second-order finite-difference scheme with $\delta x = 1$. For $c = 0$, the dynamical system has 16 positive Lyapunov exponents, the largest of which is $\lambda_1 = 0.093$ [42].
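A minimal sketch of this discretisation is given below; it is ours, and the explicit Euler time stepping and the homogeneous boundary treatment are illustrative choices (the paper does not state its time-marching scheme):

```python
# Second-order central differences for the KS right-hand side, dx = 1 on [0, L],
# with u = 0 assumed outside the domain (our simplification for illustration).
import numpy as np

L, dx = 128, 1.0
N = int(L / dx) - 1                          # interior points

def ks_rhs(u, c):
    up = np.pad(u, 2)                        # embed u = 0 beyond the boundaries
    ux = (up[3:-1] - up[1:-3]) / (2 * dx)
    uxx = (up[3:-1] - 2 * up[2:-2] + up[1:-3]) / dx**2
    uxxxx = (up[4:] - 4 * up[3:-1] + 6 * up[2:-2] - 4 * up[1:-3] + up[:-4]) / dx**4
    return -(u + c) * ux - uxx - uxxxx

u = 0.1 * np.random.randn(N)                 # random initial condition
dt = 0.01
for _ in range(1000):                        # explicit Euler, purely illustrative
    u = u + dt * ks_rhs(u, 0.0)
```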
Two objective functions are considered: the space-time average of the state $u(x, t; c)$,
$$J_1(c) = \frac{1}{T}\int_0^T \frac{1}{L}\int_0^L u(x, t; c)\, dx\, dt, \tag{32}$$
and of the total kinetic energy,
$$J_2(c) = \frac{1}{T}\int_0^T \frac{1}{L}\int_0^L \frac{1}{2}\, u^2(x, t; c)\, dx\, dt, \tag{33}$$
where an overbar $\overline{(\,)}$ denotes a time average and a prime $(\,)'$ the fluctuation around the average, i.e. $u = \overline{u} + u'$. We seek their sensitivities with respect to $c$, i.e. $\frac{dJ_1}{dc}$ and $\frac{dJ_2}{dc}$.
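Given a time integrator, the two objectives and a finite-difference reference sensitivity (the FD data the paper compares against) can be estimated as sketched below; this is our code, it reuses `ks_rhs` and `N` from the previous listing, and the horizon, step and perturbation sizes are arbitrary:

```python
# Hypothetical post-processing: running space-time averages J1, J2 and a central
# finite-difference estimate of dJ/dc.
import numpy as np

def objectives(c, T=2000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    u = 0.1 * rng.standard_normal(N)
    J1 = J2 = 0.0
    steps = int(T / dt)
    for _ in range(steps):
        u = u + dt * ks_rhs(u, c)
        J1 += u.mean()
        J2 += 0.5 * (u**2).mean()
    return J1 / steps, J2 / steps

dc = 0.1
J1p, J2p = objectives(+dc)
J1m, J2m = objectives(-dc)
print("FD estimates:", (J1p - J1m) / (2 * dc), (J2p - J2m) / (2 * dc))
```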
As can be seen from figure 3, the sensitivities obtained using the shadowing harmonic operator match very well those obtained using the preconditioned MSS [19]. A comparison with finite-difference (FD) data is also presented. For $\frac{dJ_1}{dc}$, there is a small bias, which has also been observed in the time-domain formulation of the method [19,20,42]; for $\frac{dJ_2}{dc}$ the matching with FD is very good. The difference with respect to the reference value initially decays at a rate $1/T$ (similarly to [40]) and then at $1/\sqrt{T}$. The size of the Hill matrix, however, grows with both the state dimension and the number of retained harmonics; thus, for larger systems, both storage and solution costs grow fast.
Below we investigate in more detail the properties of the shadowing harmonic operator, while in section 5 we explore an approach that can mitigate the aforementioned rapid growth of computational cost and storage requirements for larger systems.
Singular value decomposition of the shadowing harmonic operator
The singular values $\sigma_i$ of the shadowing resolvent matrix $H$, defined in (26), are obtained from the solution of the eigenvalue problem
$$H^* H\, \Phi_i = \sigma_i^2\, \Phi_i.$$
The solution maximises the system gain, defined as the ratio of the (squared) 2-norms of the output (response $V$) and the input (forcing $R$), i.e.
$$\sigma^2 = \frac{\|V\|_2^2}{\|R\|_2^2},$$
where $\|V\|_2^2 = V^* V$ and $\|R\|_2^2 = R^* R$. Since $\delta x = 1$, the norms represent the discrete values of the integrals over the domain (for an application to the turbulent flow around a cylinder, see [24]). Note also that as $T$ grows, apart from the largest singular values, the rest start to converge. This indicates that they represent the true behaviour of the system, i.e. they are physically meaningful. The leading optimal forcings, $\Phi_1$, $\Phi_2$ and $\Phi_3$, are shown in panels (b)-(d) of figure 8, respectively. The distributions are more difficult to interpret physically, but note the significant differences with respect to the actual forcing. It is interesting to note, for example, that they do not exhibit the wavy pattern of $\frac{df}{ds}(x, t)$; instead they are relatively smooth, but with some local peaks and valleys.
Using the $r$ largest singular values of $H$, an approximate solution of system (24) can be written as
$$V^{(r)} = \sum_{i=1}^{r} \sigma_i\, R_{\Phi_i}\, \Psi_i, \tag{38}$$
where $R_{\Phi_i} = \Phi_i^* R$ is the projection of the right-hand side $R$ onto the optimal forcing $\Phi_i$ and $\Psi_i$ is the corresponding optimal response. For $r = (2q+1)(N_u+1)$ the approximation is exact. Using $V^{(r)}$, the sensitivities $\frac{dJ_1}{dc}$ and $\frac{dJ_2}{dc}$ can be computed from (34) and (35), respectively; these are plotted as functions of $r$ in figure 9. It can be seen that a relatively large value of $r$ is required to obtain an accurate result. Although the first few singular values $\sigma_i$ are large, the component of $R$ along the $\Phi_i$ direction is weak; thus, a substantial number of terms are required to obtain the correct sensitivity. Bearing in mind that $R_k = \left[\widehat{\left(\partial f/\partial s\right)}_k^{\,\top}, 0\right]^\top$, this is related to the different patterns of $\frac{df}{ds}(x, t)$ and the optimal forcings (at least for the first 3 modes), as shown in the previous figure 8.
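The rank-r reconstruction (38) is straightforward to express with a standard SVD; the sketch below is ours and assumes $H$ and $R$ have been assembled as dense arrays:

```python
# Rank-r approximation of V = H R from the r largest singular triplets of H.
import numpy as np

def truncated_solution(H, R, r):
    Psi, s, Phi_h = np.linalg.svd(H)        # H = sum_i s_i Psi_i Phi_i^*
    proj = Phi_h[:r, :] @ R                 # projections Phi_i^* R
    return Psi[:, :r] @ (s[:r] * proj)      # V^(r), exact when r = full rank
```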
The linear system (24) has large storage requirements and is time consuming to solve. Below we propose an approach that can mitigate these requirements.
5. A resolvent-based iterative method for the shadowing direction
Instead of solving system (24) directly, an iterative method can be devised, where only a few diagonals are retained on the left-hand side and treated implicitly, while the rest are moved to the right-hand side and updated at every iteration. For example, by retaining only the blocks of the main diagonal, the Hill matrix becomes block-diagonal and the blocks decouple. In this case, the iterative method takes the form
$$\left(ik\omega_0 I_{N_u} - \widehat{\left(\frac{\partial f}{\partial u}\right)}_0\right)\hat{v}_k^{(m)} - \hat{f}_0\, \hat{\eta}_k^{(m)} = \widehat{\left(\frac{\partial f}{\partial s}\right)}_k + \hat{g}_k^{(m-1)}, \qquad \hat{f}_0^{\,\top}\, \hat{v}_k^{(m)} = \hat{h}_k^{(m-1)}, \tag{39}$$
where $m$ is the iteration number and $\hat{g}_k, \hat{h}_k$ denote the explicitly treated terms.
It is instructive to derive the form of vectors g(t) and h(t) in the time domain.
To this end, we use the Reynolds decomposition
$$v = \overline{v} + v', \qquad \eta = \overline{\eta} + \eta', \qquad \frac{\partial f}{\partial u} = \overline{\frac{\partial f}{\partial u}} + \left(\frac{\partial f}{\partial u}\right)', \qquad f = \overline{f} + f',$$
and substitute into (9b) and (9c). Taking the time average of the resulting equations gives
$$0 = \overline{\frac{\partial f}{\partial u}}\,\overline{v} + \overline{\left(\frac{\partial f}{\partial u}\right)' v'} + \overline{\frac{\partial f}{\partial s}} + \overline{\eta}\,\overline{f} + \overline{\eta' f'}, \qquad \overline{f}^{\,\top}\overline{v} + \overline{f'^{\,\top} v'} = 0, \tag{42}$$
and subtracting the averaged set from the instantaneous one yields the equations for the fluctuations. After some rearrangement, the fluctuating system takes the form
$$\frac{dv'}{dt} = \overline{\frac{\partial f}{\partial u}}\, v' + \overline{f}\,\eta' + \left(\frac{\partial f}{\partial s}\right)' + g'(t), \qquad \overline{f}^{\,\top} v' = h'(t),$$
with
$$g'(t) = \left(\frac{\partial f}{\partial u}\right)'\overline{v} + \left(\frac{\partial f}{\partial u}\right)' v' - \overline{\left(\frac{\partial f}{\partial u}\right)' v'} + \overline{\eta}\, f' + \eta' f' - \overline{\eta' f'}, \tag{44a}$$
$$h'(t) = -\left(f'^{\,\top}\overline{v} + f'^{\,\top} v' - \overline{f'^{\,\top} v'}\right). \tag{44b}$$
Taking the Fourier transform of the fluctuating system leads to (39) for $k \neq 0$. Similarly, the time-averaged system (42) can be written in the same generic form, from which we obtain the block of (39) that corresponds to $k = 0$.
As can be seen from (44a), $g'(t)$ consists of two groups of three terms; the first group involves the fluctuating Jacobian and the sensitivity $v$, and the second the fluctuating $\eta'$ and $f'$. It is possible to obtain a simplified system by assuming that $\eta$ is constant (for the physical interpretation see [40]). In this case we get
$$\frac{dv}{dt} = \overline{\frac{\partial f}{\partial u}}\, v + \frac{\partial f}{\partial s} + \eta f + \tilde{g}(t), \tag{48a}$$
together with an additional constraint to determine the constant $\eta$. To this end, we require that $f(t)$ and $v(t)$ be perpendicular in a time-averaged sense,
$$\overline{f^{\top} v} = \frac{1}{T}\int_0^T f^{\top} v\, dt = 0. \tag{48b}$$
Note that this condition couples together all Fourier components. Taking the Fourier transform of (48) we can form the following iterative method,
$$\left(ik\omega_0 I_{N_u} - \widehat{\left(\frac{\partial f}{\partial u}\right)}_0\right)\hat{v}_k^{(m)} = \widehat{\left(\frac{\partial f}{\partial s}\right)}_k + \eta^{(m)}\hat{f}_k + \tilde{g}_k^{(m-1)}. \tag{50a}$$
In the first iteration, $m = 1$, we set $\tilde{g}_k^{(0)} = 0$ and get
$$\left(ik\omega_0 I_{N_u} - \widehat{\left(\frac{\partial f}{\partial u}\right)}_0\right)\hat{v}_k^{(1)} = \widehat{\left(\frac{\partial f}{\partial s}\right)}_k + \eta^{(1)}\hat{f}_k, \qquad \sum_k \hat{f}_k^{\,*}\, \hat{v}_k^{(1)} = 0. \tag{51a,b}$$
These two equations can be combined to obtain $\eta^{(1)}$. Denoting the standard resolvent operator as
$$R(k\omega_0) = \left(ik\omega_0 I_{N_u} - \widehat{\left(\frac{\partial f}{\partial u}\right)}_0\right)^{-1}, \tag{52}$$
solving for $\hat{v}_k^{(1)}$ and substituting into (51b), we get
$$\eta^{(1)} = -\frac{\sum_k \hat{f}_k^{\,*}\, \lambda_k}{\sum_k \hat{f}_k^{\,*}\, \mu_k},$$
where
$$\lambda_k = R(k\omega_0)\, \widehat{\left(\frac{\partial f}{\partial s}\right)}_k, \qquad \mu_k = R(k\omega_0)\, \hat{f}_k. \tag{56a,b}$$
Thus, the solution of two linear systems that involve the standard resolvent operator $R(k\omega_0)$ is required. Due to the linearity of (50a), the solution can be decomposed as
$$\hat{v}_k^{(1)} = \lambda_k + \eta^{(1)}\, \mu_k. \tag{57}$$
Applying the inverse Fourier transform to $\hat{v}_k^{(1)}$ yields $v^{(1)}(t)$, from which $\tilde{g}(t)$ can be obtained from (48) and Fourier transformed to find $\tilde{g}_k^{(1)}$. The right-hand side of (50a) can then be assembled and the second iteration performed. Note that system (56b) does not change with $m$, so $\mu_k$ is computed only once. In this process, the key variables $\lambda_k$, $\eta$ and $\mu_k$ required for the evaluation of the sensitivity $\hat{v}_k$ (also known as the shadowing direction) are obtained with the aid of the resolvent operator $R(k\omega_0)$. This is therefore a resolvent-based iterative method for computing the shadowing direction, called Resolvent-based Shadowing (RbS).
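The first RbS iteration can be summarised in a few lines; the sketch below is ours, uses dense solves in place of the LU factorisations discussed next, and folds the $k = 0$ block into the same loop for brevity (the paper treats $k = 0$ through the time-averaged system):

```python
# One illustrative RbS iteration (m = 1, g_tilde = 0): for each harmonic k, two
# solves with the matrix inverted by R(k w0) give lambda_k and mu_k; the constant
# eta follows from the orthogonality constraint, and v_k = lambda_k + eta mu_k.
import numpy as np

def rbs_first_iteration(Abar, r_hat, f_hat, q, w0):
    Nu = Abar.shape[0]
    lam, mu = {}, {}
    num = den = 0.0 + 0.0j
    for k in range(-q, q + 1):
        Rk = 1j * k * w0 * np.eye(Nu) - Abar   # mean Jacobian assumed invertible
        lam[k] = np.linalg.solve(Rk, r_hat[k])
        mu[k] = np.linalg.solve(Rk, f_hat[k])
        num += f_hat[k].conj() @ lam[k]        # build sum_k f_k^* v_k = 0
        den += f_hat[k].conj() @ mu[k]
    eta = -num / den
    v_hat = {k: lam[k] + eta * mu[k] for k in lam}
    return v_hat, eta
```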
The most efficient approach is to perform the LU decomposition associated with $R(k\omega_0)$ once at the start of the sensitivity analysis and then solve systems (56) with forward and backward substitutions for each $k$. Note that the decomposition (57) of $\hat{v}_k$ into a linear combination of $\lambda_k$ and $\mu_k$ is valid for all iterations, and of course for the final converged solution.
In (48a), the term $\left(\frac{\partial f}{\partial u}\right)' v$ was considered part of $\tilde{g}'(t)$ and treated explicitly. This is not necessary, but it simplifies the algebra. A better approach would be to obtain $v$ from (48b) and substitute into (48a); this would lead to a form very similar to (51a), albeit more complex. The process to extract $\eta$ remains the same. Such an approach would better couple the time-averaged and fluctuating systems. For one- and two-dimensional systems, the resulting resolvent systems can be solved very efficiently using existing linear algebra packages, such as MUMPS [43], that exploit the sparse structure of the Jacobian. For three-dimensional inhomogeneous flows this is more challenging, but doable (at least for moderate-size systems).
The proposed method was applied to compute the sensitivities of the objective functions $J_1(c)$ and $J_2(c)$, defined in (32) and (33), for the Kuramoto-Sivashinsky equation. Since $\eta$ is constant, equations (34) and (35) simplify; we refer to the simplified expressions as (58) and (59) below. In terms of computational cost, it took approximately 0.15 s of CPU time to evaluate these sensitivities; this is about two orders of magnitude faster than the preconditioned MSS and one order of magnitude faster than the shadowing harmonic approach. Again, we stress that these results are case-dependent.
More research is needed to investigate the performance of the algorithm in other flow cases.
In the next section, we explore the properties of the standard resolvent operator $R(k\omega_0)$. For angular frequencies $\omega > 2\pi \times 0.3 \approx 1.9$ there is no coupling with the time-averaged Jacobian, leading to $\sigma(\omega) \approx 1/\omega$ (indicated by a black dashed line); again, this is consistent with the frequency spectra. Substituting expression (57) into (58) yields expression (60) for the sensitivity. The solutions $\lambda_0$ and $\mu_0$ of the linear systems (56a) and (56b) can be written in terms of the optimal forcings and responses evaluated for $\omega = 0$ as
$$\lambda_0 = \sum_{i=1}^{r} \sigma_i \left(\Phi_i^*\, \widehat{\left(\tfrac{\partial f}{\partial s}\right)}_0\right)\Psi_i, \qquad \mu_0 = \sum_{i=1}^{r} \sigma_i \left(\Phi_i^*\, \hat{f}_0\right)\Psi_i,$$
where $r$ is the number of retained singular values. This expression is analogous to equation (38) presented earlier for the harmonic balance method. Substituting into (60), the sensitivity can therefore be written as a weighted sum of the spatial averages of the optimal responses, $\Psi_i$.
Similarly, for $\frac{dJ_2}{dc}$ we obtain from (59) an expansion of the form (63) over the harmonics $k$. Expression (63) is useful because it allows us to find the contribution of each frequency to the sensitivity; we investigate this in figure 13a. More specifically, we compute $\frac{dJ_2}{dc}$ using frequencies in the interval $[-f, f]$, where $f = \omega/(2\pi)$, and we plot the result against $f$. Note the convergence of $\frac{dJ_2}{dc}$ to the value predicted by the harmonic resolvent as $f$ increases, i.e. as the range of $k$ in (63) expands.
It can be seen that only the frequency range $[0.01, 0.07]$ contributes, which is consistent with the spectra shown in figure 2. Outside this range, the spectral content is small and does not contribute to the sensitivity. Note that this is the result of a single realisation; no averaging over initial conditions has been performed to obtain this plot. Hence, although the maximum singular value is significantly larger than the rest, as evidenced in figure 11, keeping just one contribution will not provide accurate results. In order to explain this behaviour for $\frac{dJ_1}{dc}$, the optimal forcing corresponding to $\sigma_{max}$ at $\omega = 0$ is plotted together with the true forcing $\frac{df}{ds}(x, t)$.
"Physics",
"Engineering"
] |
Thermophoretic torque in colloidal particles with mass asymmetry
We investigate the response of anisotropic colloids suspended in a fluid under a thermal field. Using nonequilibrium molecular dynamics computer simulations and nonequilibrium thermodynamics theory, we show that an anisotropic mass distribution inside the colloid rectifies the rotational Brownian motion and the colloids experience transient torques that orient them along the direction of the thermal field. This physical effect gives rise to distinctive changes in the dependence of the Soret coefficient on colloid mass, which features a maximum, unlike the monotonic increase of the thermophoretic force with mass observed in homogeneous colloids.
I. INTRODUCTION
Thermal gradients induce mass and charge coupling effects that can potentially be employed to construct devices for energy conversion applications. Since the early observations of thermodiffusion by Ludwig and Soret [1,2] in the 19th century, other coupling effects have been reported over the years. Lehmann showed, shortly after the discovery of liquid crystals (LCs), that cholesteric LCs adopt a uniform rotation in response to thermal gradients [3]. Peltier and Seebeck demonstrated the coupling between electric currents and thermal gradients [4], which is the basis of thermoelectrics [5]. More recently, it was shown that thermal gradients induce the polarization of liquid water (thermal polarization, TP) and thermal orientation (TO) in molecular fluids of anisotropic particles [6][7][8]. These works highlighted particle anisotropy in molecules and colloids as a key variable driving thermal orientation effects.
The response of axially symmetric particles, e.g., spherocylinders, to thermal gradients was investigated in Refs. [9,10]. These works used kinetic theory to study particles suspended in a gas (aerosols) under thermal gradients. It was found that the thermophoretic drift is anisotropic. This observation is relevant for anisotropic particles [11], since the anisotropy could influence deposition processes in suspensions. More recently, the thermophoretic drift of spherocylinders has been investigated using hydrodynamic computer simulations [12]. These authors reported anisotropic thermophoresis in fluid suspensions as well, and concluded that the anisotropy does not induce particle orientation in a homogeneous thermal field. We discuss in this work how an inhomogeneous mass distribution inside a colloid can impart an orientation in such homogeneous fields.
The manipulation of suspensions of anisotropic particles with thermal gradients is attracting significant attention and motivating new experiments. The thermodiffusive behavior of colloidal rods (fd viruses) was studied recently [13], and it has been shown that the heating of metallic nanorods in a polarized optical trap induces a thermal gradient around the rod and a torque that can be significant (∼10² pN nm) [14]. Theoretical and experimental studies of anisotropic particles under thermal gradients have uncovered fascinating behaviors that could open new routes to manipulate small colloids using thermal fields. Understanding the behavior of these particles under nonequilibrium conditions is of fundamental interest to explain and predict coupling effects out of equilibrium.
In this work we investigate the response of small anisotropic colloids dispersed in a dense fluid, which is subjected to a thermal gradient. We will analyze the thermal coupling effects arising from the colloid anisotropy, when the colloids feature an inhomogeneous mass distribution. We will demonstrate that mass anisotropy, which is a general feature of molecular assemblies and a property that can be tuned in colloids with heterogeneous composition, does couple with the imposed thermal fields. The coupling leads to a transient thermophoretic torque and the colloids adopt in the stationary state a preferred orientation. We also show that this coupling influences and modulates the Soret coefficient of the anisotropic colloids.
The systems we are interested in, colloidal suspensions of anisotropic particles with asymmetry in the mass distribution, involve high fluid densities and short mean free paths, the latter being smaller than the characteristic particle size. Further, frictional effects, hydrodynamic interactions, thermal fluctuations, and the fluid structuring around the particle become important in describing the thermophoretic behavior of the suspension. These issues limit the applicability of kinetic theory. We therefore used nonequilibrium atomistic molecular dynamics to perform our investigation. We also use nonequilibrium thermodynamics theory [4,8] to derive phenomenological equations that describe the physical effects discussed herein.
II. METHODOLOGY
We performed simulations of a single colloid suspended in an atomic fluid at a density characteristic of a liquid. The fluid was modeled using the WCA potential [15] at an average density of $\rho = (N/V)\sigma^3 = 1.0$, where $N$ is the number of solvent particles, $V = L_x \times L_y \times L_z$ is the volume of the simulation cell, and $\sigma$ is the diameter of the solvent particles. The colloid was modeled as a rigid chain of tangent spheres, using a collection of beads of diameter $\sigma$. We considered both symmetric and asymmetric mass distributions. The latter was achieved by changing the mass of one bead at the end of the colloid, such that the total mass is given by $m_{total} = (N-1)m + m_{end}$. The mass, $m$, of the remaining beads is equal to the mass of the solvent particles, $m_s$, and is set to $m = m_s = 1$ hereafter. For the symmetric case we used $m_{total} = (N-2)m + 2 m_{end}$. The degree of mass asymmetry can be quantified by introducing a new quantity, the mass dipole,
$$d = \sum_{i=1}^{N_c} m_i\, (r_i - r_g), \tag{1}$$
where $r_g$ is the geometric center of the colloid, $m_i$ and $r_i$ are the mass and position of the beads along the colloid axis, and $N_c$ is the number of beads in the colloid. For the model considered in this work the dipole can be written in terms of $\gamma = m_{end}/m$, with $\gamma = 1$ ($d = 0$) recovering the symmetric case. We show in Fig. 1 a typical simulation cell illustrating the position of the thermostats employed to set up the thermal gradient (see also Supplemental Material [16]). The center of mass of each colloid was tethered using a harmonic potential (see below) to the geometric center, $r_c$, of the left and right containers in the simulation cell (see Fig. 1), while the colloids rotate freely around their centers of mass. Colloids of different lengths and different values of the $\gamma$ parameter defined above are shown in Fig. 1.
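For concreteness, the helper below (ours, following the definitions above) evaluates the mass dipole of the bead-chain model; the chain length and mass ratio are illustrative:

```python
# Mass dipole of a rigid chain of N_c beads with masses m_i and positions r_i.
import numpy as np

def mass_dipole(masses, positions):
    r_g = positions.mean(axis=0)               # geometric centre of the colloid
    return np.sum(masses[:, None] * (positions - r_g), axis=0)

# Shish-kebab chain of N = 7 tangent beads (sigma = 1) with one heavy end bead:
N, gamma = 7, 5.0
m = np.ones(N); m[-1] = gamma                  # m_end = gamma * m
r = np.column_stack([np.arange(N, dtype=float), np.zeros(N), np.zeros(N)])
print(mass_dipole(m, r))                       # vanishes when gamma = 1
```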
The WCA potential was used to model the interparticle interactions,
$$U(r) = \begin{cases} 4\epsilon\left[\left(\dfrac{\sigma}{r}\right)^{12} - \left(\dfrac{\sigma}{r}\right)^{6}\right] + \epsilon, & r < 2^{1/6}\sigma, \\[4pt] 0, & r \geq 2^{1/6}\sigma, \end{cases}$$
where $\epsilon$ represents the interaction strength and $\sigma$ is the diameter of the solvent particles. The interaction parameters were $\epsilon = \sigma = 1$ for all colloid-fluid and fluid-fluid interactions, while the atoms within the nanoparticle did not interact with each other. We use $\sigma$ and $\epsilon$ to define the usual Lennard-Jones units. 16000 particles were initially placed in a simulation box of size $(40, 20, 20)\sigma$, for an average reduced density $\rho = 1.0$ and reduced temperature $T = 2.5$. Two colloidal particles were placed at positions $(10,10,10)\sigma$ and $(30,10,10)\sigma$ with random orientations. Each colloid was treated as a rigid body. The harmonic potential to restrain the translational motion is defined as $V(r) = k(r - r_c)^2$, where $k$ is the force constant and $r_c$ is the position of the reference point at the beginning of the simulation, corresponding to the geometric center between the hot and cold thermostats (see Fig. 1). The force constant was set to $k\sigma^2/\epsilon = 10^3$. A thermal gradient and a heat flux were imposed on the system using boundary-driven nonequilibrium molecular dynamics (BD-NEMD) simulations, with a time step of $\delta t = 0.0025$. All the trajectories were integrated using LAMMPS [17]. Thermostatting regions with dimensions $(4,10,10)\sigma$ were set up at the edges and in the center of the prismatic box (see Fig. 1). The motion of the fluid atoms was integrated using the velocity Verlet method, and the translational and rotational degrees of freedom of the colloids were integrated using the method of quaternions for rigid bodies. The system was equilibrated at the desired average temperature of the NEMD run using the Nosé-Hoover thermostat. We then applied the thermal gradient by thermostatting the hot and cold regions at the desired temperatures. The evolution of the temperature gradient through the system was monitored periodically over a simulation of $10^6$ steps; the gradient reaches its stationary profile before the end of this period. Production simulations were performed under the same conditions for an additional $10^8$ time steps, and the orientation vector, $n$, of each colloid was sampled every 100th step.
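The WCA interaction itself is standard; a minimal implementation is sketched below (ours), with the cut-and-shift at $r = 2^{1/6}\sigma$ making the potential purely repulsive:

```python
# WCA pair potential: Lennard-Jones cut and shifted at its minimum, r = 2^(1/6) sigma.
import numpy as np

def wca(r, eps=1.0, sigma=1.0):
    rc = 2.0 ** (1.0 / 6.0) * sigma
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    u = 4.0 * eps * (sr6**2 - sr6) + eps       # shifted so that u(rc) = 0
    return np.where(r < rc, u, 0.0)

print(wca([1.0, 1.1, 1.3]))                    # zero beyond the cutoff
```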
The thermophoretic force on each colloid is computed as a function of the thermal gradient, by sampling the average displacement of the center of mass of the colloid and multiplying by the force constant of the applied harmonic potential. The total torque is referred to the center of mass of the colloid, computed for symmetric and asymmetric colloids as a function of the orientation of the colloid, but without integrating the rotational degrees of freedom. The systems were equilibrated for 10 6 steps, and the torque was averaged every 100 steps for 10 7 steps and 30 different runs.
III. RESULTS
Thermal gradients induce a thermophoretic force (TF), $F_T$, on the colloids, which drift either to the hot (thermophilic) or cold (thermophobic) region. The TF can be calculated by monitoring the force exerted on the colloids by the solvent. The TF can be related to the Soret coefficient, $S_T$, through [18,19]
$$F_s - F_T = S_T\, k_B T\, \nabla T,$$
where $T$ is the local temperature, $\nabla T$ is the thermal gradient, and $F_s$ and $F_T$ are the thermophoretic forces on the solvent and colloid, respectively. This approach has been successfully tested before to compute the Soret coefficient of colloids and binary mixtures [18,19]. In our case the thermophoretic force dominates the value of the Soret coefficient. The force on the solvent, measured at $\nabla T = 0.12$, is $F_s = -0.101 \pm 0.003$, which is of the order of $k_B \nabla T$ at infinite dilution [18]; the corresponding contribution to the Soret coefficient is $0.343 \pm 0.004$. Our thermophoretic forces change linearly with the magnitude of the thermal gradient and with the number of beads $N$ (see Fig. 2). This result is consistent with the equation for $F_T$ and shows that all the systems considered here are in the linear regime.
We show in Fig. 3 the simulation data for the Soret coefficient as a function of the type of colloid (symmetric and asymmetric) and its mass. In both cases the Soret coefficient is positive, meaning that the colloid is thermophobic and drifts towards the cold region. In the symmetric case the Soret coefficient increases monotonically with the total mass of the colloid and saturates at high masses. The dependence on mass and the saturation of the Soret coefficient can be modeled using the expression
$$S_T = C_N\, \frac{m_n - m_s}{m_n + m_s} + D_N,$$
where $C_N$ and $D_N$ vary linearly with the colloid mass (see Fig. 3) and $m_n$ and $m_s$ represent the masses of the colloid and the solvent, respectively. This relationship resembles the kinetic-theory equation for mixtures of elastic spheres interacting through $r^{-12}$ potentials [20], which is the potential we also employed here. The asymmetric mass distribution results in distinctive differences in the Soret coefficient, which does not change monotonically with the colloid mass. This deviation from the symmetric case is particularly clear for longer colloids ($N = 11$), for which the Soret coefficient features a maximum at small colloidal masses before converging to values that are clearly smaller than those obtained in the symmetric case. We note that this difference in thermophoretic response is driven by mass asymmetry only, and we argue that it is induced by the coupling of the inhomogeneous mass distribution with the thermal field. We discuss now how the coupling effect leading to the torque is supported theoretically by nonequilibrium thermodynamics [8]. The change in energy of the colloid due to the orientation is defined by Eq. (3), where $k_E$ is a force constant that determines the change in energy associated with deviations from the preferred orientation of the colloid and $n$ is the unit vector defining its orientation (see Fig. 1). (See Ref. [8] for an example of an orientational vector in a different system, corresponding to a molecular fluid.) This energy is equal to the rotational kinetic energy of the colloid, $\frac{1}{2} I \omega^2$, where $\omega$ is the angular velocity and $I$ is the moment of inertia. The entropy production associated with the relaxation of the orientation involves the rotational friction coefficient $\zeta_r$ and a phenomenological coefficient $L_{nn}$, defined below. The linear laws describing the fluxes are $dn/dt = -L_{nn}\, k_E\, T^{-1}\, n$ and $\tau = -\zeta_r\, \omega$, where $\tau = I\, d\omega/dt$ is the torque. Combining these relations, we get
$$L_{nn} = \frac{\zeta_r}{I},$$
where $I$ is the moment of inertia. The entropy production associated with the rotation of the colloid in the presence of the thermal gradient is
$$\sigma = -\frac{k_E}{T}\, n \cdot \frac{dn}{dt} + J_q \cdot \nabla\!\left(\frac{1}{T}\right),$$
and the corresponding linear flux-force relations are
$$\frac{dn}{dt} = -L_{nn}\frac{k_E}{T}\, n + L_{nq}\, \nabla\!\left(\frac{1}{T}\right), \qquad J_q = -L_{qn}\frac{k_E}{T}\, n + L_{qq}\, \nabla\!\left(\frac{1}{T}\right),$$
where $J_q$ is the heat flux, $L_{\alpha\beta} \equiv L_{\beta\alpha}$ for $\alpha \neq \beta$ is the cross phenomenological coefficient, and $\lambda \equiv L_{qq}/T^2$ has the usual meaning of the thermal conductivity in the absence of coupling effects, i.e. $L_{nq} = L_{qn} = 0$. At the stationary state, $dn/dt = 0$, and the average orientation of the colloid is given by
$$\cos(\theta) = -\frac{L_{nq}}{L_{nn}\, k_E}\, \frac{\nabla T}{T}, \tag{9}$$
where $\cos(\theta) = u_{J_q} \cdot n$, i.e. the orientation is given by the projection of the unit vector $n$ along the unit vector defining the direction of the heat flux (thermal gradient), $u_{J_q}$. Equation (9) predicts a linear dependence of the stationary value of $\cos(\theta)$ on the thermal gradient, $\nabla T$. We show in Fig. 4 that our simulation results follow this linear dependence on $\nabla T$ for colloids of different lengths, $N$. Further, the strength of the orientation for a given thermal gradient and temperature increases with the colloid length.
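In practice, extracting $L_{nq}/k_E$ from Eq. (9) amounts to a linear fit of the stationary orientation against the imposed gradient. The snippet below is ours, and the numbers are made up purely to illustrate the procedure:

```python
# Hypothetical fit: the slope of <cos(theta)> versus the imposed gradient is
# proportional to L_nq / (L_nn k_E), per Eq. (9).
import numpy as np

grad_T = np.array([0.04, 0.08, 0.12, 0.16])    # imposed thermal gradients
cos_th = np.array([0.05, 0.11, 0.16, 0.22])    # sampled stationary orientations
slope, intercept = np.polyfit(grad_T, cos_th, 1)
print(f"slope = {slope:.3f} (proportional to L_nq / (L_nn k_E))")
```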
This dependence can be rationalized in terms of the coefficient $L_{nn} = \zeta_r/I$, which can be calculated for the shish-kebab model investigated here (see the Appendix). Indeed, $L_{nn}$ decreases as the length of the colloid increases, contributing to an increase of the orientation [see Eq. (9)]. The definition of $L_{nn}$ implies that the orientation increases with the mass asymmetry, i.e. the mass dipole. This prediction agrees with our computations [see Fig. 4(b)]. We have used our simulation results and Eq. (9) to quantify the dependence of $L_{nq}/k_E$ on the mass dipole (asymmetry). The cross phenomenological coefficient increases rapidly with the mass dipole and saturates at large dipole values (see Fig. 4). The coefficient follows trends similar to those of $L_{nn}$, increasing in magnitude with the colloid size.
Our results show that the stationary orientation is determined to a large extent by the mass asymmetry (mass dipole, moment of inertia) and geometry (friction coefficient) of the colloid. The orientation influences the thermophoretic force, and this explains the nonmonotonic dependence of the Soret coefficient reported in Fig. 3. We have computed the Soret coefficient at fixed orientations with respect to the direction of the heat flux. The Soret coefficient depends strongly on the angle that the colloid makes with the heat flux vector (see Fig. 5), reaching its lowest or highest value when the colloid is fully parallel or perpendicular, respectively, to the direction of the thermal gradient. This observation agrees with results reported in Ref. [12]. The results reported in Fig. 5 offer a clear explanation for the dependence of the Soret coefficient reported in Fig. 3. For a given thermal gradient, the colloid will move in the direction of the heat flux according to a given thermophoretic force. The force increases monotonically with the mass and saturates to a fixed value in the limit of very large masses. When the colloid features mass asymmetry, a coupling with the heat flux is possible and the colloid aligns with the thermal gradient. The alignment is stronger for larger asymmetries [cos(θ) → 1], and this results in a reduction of the thermophoretic force (see Fig. 5). Consequently, the Soret coefficient will be smaller than the one corresponding to a symmetric colloid with the same mass.
Finally, we have calculated the magnitude of the torque associated with the orientation of the asymmetric colloid. To do this we fixed the colloid at specific orientations in the interval $(0, \pi)$ relative to the direction of the thermal gradient. The torque is always positive in this interval, indicating that the colloid would rotate with the heavy mass towards the hot region. The magnitude of the torque increases with the number of beads, $N$. As expected, the torque for the symmetric colloid is zero, consistent with the observation that it does not adopt a preferred orientation. When considering SI units, the magnitude of the maximum torque calculated here (see Fig. 4), $\tau/\epsilon \sim 3$, will scale linearly with the characteristic energy scale of the system under investigation. Using an energy of 1 kJ/mol, corresponding to typical dispersion interactions, we get $\sim 5$ pN nm, to be compared with torques achieved experimentally using metallic nanoparticles, $\sim 100$ pN nm [14].
IV. CONCLUSIONS AND FINAL REMARKS
The physical principle discussed here might offer new avenues to manipulate colloidal suspensions consisting of colloids with inhomogeneous mass distributions. The thermal field will induce a transient torque, and therefore a rotation, in these colloids, which could be observed in experimental studies. Our work indicates that colloids of similar shape but different internal mass distribution would experience different thermophoretic forces. This effect can lead to the fractionation of colloids of different mass composition in a thermal field, due to their different thermophoretic forces. Experimental studies of colloidal suspensions consisting of colloids with different compositions would be very helpful to test the thermophoretic coupling effect discussed in our work. Our theoretical formulation provides clues on which key variables can be modified to tune the orientational effect. Specifically, we expect that increasing the moment of inertia or reducing the friction will enhance the thermal orientation effect. Further work considering colloid-colloid interactions will be needed to get a full picture of the thermophoretic torque. One key message from our work is that it is indeed possible to induce orientation in axially symmetric colloids using homogeneous fields. This can be achieved by tuning the internal mass distribution of the colloid; this physical effect is therefore driven by the internal degrees of freedom of the colloids.
ACKNOWLEDGMENTS
We acknowledge the EPSRC-UK (Grant No. EP/J003859/1) and the EU NanoHeal ITN project grant agreement No. 642976 for financial support. J.M.R. thanks The Leverhulme Trust for the award of a Leverhulme Professorship to visit the Department of Chemistry at Imperial College London. We thank the Imperial College High Performance Computing Service for providing computational resources.
APPENDIX
For the shish-kebab model studied here the friction is given by $\zeta_r = \pi \eta L^3 / [3 \ln(L/2\sigma)]$ [21], where $\eta$ is the viscosity and $L = N\sigma$ is the length of the colloid. For the model used in the paper to describe the mass asymmetry we can define the moment of inertia as
$$I = \sum_{j=1}^{N} m_j\, (r_j - r_{COM})^2,$$
where $m$ and $m_{end}$ are the masses of the normal and heavy beads, respectively, $r_j$ are the coordinates of the beads along the colloid axis, and $r_{COM}$ is the coordinate of the center of mass of the colloid, which is given by
$$r_{COM} = \frac{\sum_{j=1}^{N} m_j\, r_j}{\sum_{j=1}^{N} m_j}.$$
The moment of inertia can then be simplified accordingly.
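The two Appendix expressions are easily evaluated numerically; the helpers below are ours, with the viscosity and bead parameters as placeholders:

```python
# Rotational friction zeta_r = pi eta L^3 / [3 ln(L / 2 sigma)] and moment of
# inertia about the centre of mass for a bead chain with one heavy end bead.
import numpy as np

def zeta_r(N, eta=1.0, sigma=1.0):
    L = N * sigma
    return np.pi * eta * L**3 / (3.0 * np.log(L / (2.0 * sigma)))

def moment_of_inertia(N, gamma, m=1.0, sigma=1.0):
    masses = m * np.ones(N); masses[-1] = gamma * m
    r = sigma * np.arange(N)                  # bead centres along the axis
    r_com = np.sum(masses * r) / masses.sum()
    return np.sum(masses * (r - r_com) ** 2)

print(zeta_r(11), moment_of_inertia(11, gamma=5.0))
```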
"Physics"
] |
S-asymptotically ω-periodic solutions in the p-th mean for a Stochastic Evolution Equation driven by Q-Brownian motion
In this paper, we study the existence (and uniqueness) and the asymptotic stability of p-th mean S-asymptotically ω-periodic solutions for some non-autonomous stochastic evolution equations driven by a Q-Brownian motion. This is done using the Banach fixed point theorem and a Gronwall inequality.
Introduction
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space and $(H, \|\cdot\|)$ a real separable Hilbert space. We are concerned in this paper with the existence and asymptotic stability of the p-th mean S-asymptotically ω-periodic solution of the following stochastic evolution equation
$$dX(t) = A(t)X(t)\,dt + f(t, X(t))\,dt + g(t, X(t))\,dW(t), \qquad t \geq 0, \tag{1}$$
where $(A(t))_{t \geq 0}$ is a family of densely defined closed linear operators which generates an exponentially stable ω-periodic two-parameter evolutionary family. The functions $f: \mathbb{R}^{+} \times L^{p}(\Omega, H) \to L^{p}(\Omega, H)$ and $g: \mathbb{R}^{+} \times L^{p}(\Omega, H) \to L^{p}(\Omega, L_{2}^{0})$ are continuous, satisfying some additional conditions, and $(W(t))_{t \geq 0}$ is a Q-Brownian motion. The spaces $L^{p}(\Omega, H)$, $L_{2}^{0}$ and the Q-Brownian motion are defined in the next section.
The concept of periodicity is important in probability, especially for investigations of stochastic processes. The interest in such a notion lies in its significance and applications in engineering, statistics, etc. In recent years, there has been increasing interest in periodic solutions (pseudo-almost periodic, almost periodic, almost automorphic, asymptotically almost periodic, etc.) for stochastic evolution equations. For instance, among others, let us mention the existence, uniqueness and asymptotic stability results for almost periodic solutions, almost automorphic solutions and pseudo-almost periodic solutions studied by many authors; see, e.g., ([1]-[11]). The concept of S-asymptotically ω-periodic stochastic processes, which is the central question treated in this paper, was first introduced in the literature by Henriquez, Pierri et al. in ([12,13]). This notion has been developed by many authors.
In the literature, significant attention has been devoted to this concept in the deterministic case; we refer the reader to ([14]-[20]) and the references therein. However, in the random case, there are few works related to the notion of S-asymptotic ω-periodicity with regard to the existence, uniqueness and asymptotic stability of stochastic processes. To our knowledge, the first works dedicated to S-asymptotic ω-periodicity for stochastic processes are due to S. Zhao and M. Song ([21,22]), who show the existence of square-mean S-asymptotically ω-periodic solutions for a class of stochastic fractional functional differential equations and for a certain class of stochastic fractional evolution equations driven by Lévy noise. But until now, and to the best of our knowledge, there are no investigations of the existence (uniqueness) and asymptotic stability of p-th mean S-asymptotically ω-periodic solutions when p > 2.
This paper is organized as follows. Section 2 deals with some preliminaries intended to clarify the presentation of the concepts and norms used later. We also give a composition result, see Theorem 1. In section 3 we present theoretical results on the existence and uniqueness of the S-asymptotically ω-periodic solution of equation (1), see Theorem 2. We also present results on the asymptotic stability of the unique S-asymptotically ω-periodic solution of equation (1), see Theorem 3.
Preliminaries
This section is concerned with some notations, definitions, lemmas and preliminary facts which are used in what follows.
p-th mean S-asymptotically ω-periodic processes
Assume that the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is equipped with a filtration $(\mathcal{F}_t)_{t \geq 0}$ satisfying the usual conditions. Let $p \geq 2$. Denote by $L^{p}(\Omega, H)$ the collection of all strongly measurable, p-th integrable $H$-valued random variables $X$, i.e. such that
$$E\|X\|^{p} = \int_{\Omega} \|X\|^{p}\, d\mathbb{P} < \infty.$$
A continuous and bounded stochastic process $X: \mathbb{R}^{+} \to L^{p}(\Omega, H)$ is said to be p-mean S-asymptotically ω-periodic if there exists $\omega > 0$ such that
$$\lim_{t \to \infty} E\|X(t+\omega) - X(t)\|^{p} = 0.$$
The collection of p-mean S-asymptotically ω-periodic stochastic processes with values in $H$ is then denoted by $SAP_{\omega}(L^{p}(\Omega, H))$.
A continuous bounded stochastic process X, which is 2-mean S-asymptotically ω-periodic is also called square-mean S-asymptotically ω-periodic.
Remark 1 Since any p-mean S-asymptotically ω-periodic process X is L^p(Ω, H)-bounded and continuous, the space SAP_ω(L^p(Ω, H)) is a Banach space when equipped with the sup norm

||X||_∞ = sup_{t≥0} (E||X(t)||^p)^{1/p}.

Definition 4 A function F : R_+ × L^p(Ω, H) → L^p(Ω, H) which is jointly continuous is said to be p-mean S-asymptotically ω-periodic in t ∈ R_+ uniformly in X ∈ K, where K ⊆ L^p(Ω, H) is bounded, if for any ε > 0 there exists L > 0 such that E||F(t + ω, X) − F(t, X)||^p ≤ ε for all t ≥ L and all processes X : R_+ → K.

Definition 5 A function F : R_+ × L^p(Ω, H) → L^p(Ω, H) which is jointly continuous is said to be p-mean asymptotically uniformly continuous on bounded sets K ⊆ L^p(Ω, H) if for all ε > 0 there exist δ > 0 and L > 0 such that E||F(t, X) − F(t, Y)||^p ≤ ε for all t ≥ L and all X, Y ∈ K with E||X − Y||^p ≤ δ.

Theorem 1 Suppose that F : R_+ × L^p(Ω, H) → L^p(Ω, H) is p-mean S-asymptotically ω-periodic in t ∈ R_+ uniformly on bounded sets and is p-mean asymptotically uniformly continuous on bounded sets. Assume that X : R_+ → L^p(Ω, H) is a p-mean S-asymptotically ω-periodic process. Then the stochastic process (F(t, X(t)))_{t≥0} is p-mean S-asymptotically ω-periodic.
Proof 1 Since X : R_+ → L^p(Ω, H) is a p-mean S-asymptotically ω-periodic process, for all ε > 0 there exists T > 0 such that for all t ≥ T,

E||X(t + ω) − X(t)||^p ≤ ε. (2)

In addition, X is bounded, that is, X takes its values in a bounded set K ⊆ L^p(Ω, H). We have

E||F(t + ω, X(t + ω)) − F(t, X(t))||^p ≤ 2^{p−1} E||F(t + ω, X(t + ω)) − F(t + ω, X(t))||^p + 2^{p−1} E||F(t + ω, X(t)) − F(t, X(t))||^p.

Taking into account (2), applied with δ in place of ε, and using the fact that F is p-mean asymptotically uniformly continuous on bounded sets with this δ and L = T, we get that for all t ≥ T,

E||F(t + ω, X(t + ω)) − F(t + ω, X(t))||^p ≤ ε. (3)

Similarly, using the p-mean S-asymptotic ω-periodicity of F in t ≥ 0 uniformly on bounded sets, it follows that for all t ≥ T,

E||F(t + ω, X(t)) − F(t, X(t))||^p ≤ ε. (4)

Bringing together the inequalities (3) and (4), we thus obtain that for all t ≥ T,

E||F(t + ω, X(t + ω)) − F(t, X(t))||^p ≤ 2^p ε,

so that the stochastic process t → F(t, X(t)) is p-mean S-asymptotically ω-periodic.
Lemma 1 Suppose that F : R_+ × L^p(Ω, H) → L^p(Ω, H) is p-mean S-asymptotically ω-periodic in t ∈ R_+ uniformly on bounded sets and satisfies the Lipschitz condition, that is, there exists a constant L(F) > 0 such that

E||F(t, X) − F(t, Y)||^p ≤ L(F) E||X − Y||^p for all t ≥ 0 and X, Y ∈ L^p(Ω, H).

Let X be a p-mean S-asymptotically ω-periodic process; then the process (F(t, X(t)))_{t≥0} is p-mean S-asymptotically ω-periodic.
For the proof, the reader can refer to [22] for the case p = 2; the case p > 2 is similar. Now let us recall the notion of an evolutionary family of operators.
Definition 6 A two-parameter family of bounded linear operators (U(t, s))_{t≥s≥0} on L^p(Ω, H) is called an evolutionary family of operators whenever the following conditions hold: U(t, t) = I and U(t, s)U(s, r) = U(t, r) for all t ≥ s ≥ r ≥ 0, and the map (t, s) → U(t, s)x is continuous for every x. For additional details on evolution families, we refer the reader to the book by Lunardi [23].
Q-Brownian motion and stochastic integrals
Let (B_n(t))_{n≥1}, t ≥ 0, be a sequence of real-valued standard Brownian motions, mutually independent on the filtered space (Ω, F, P, (F_t)_{t≥0}). Set

W(t) = Σ_{n≥1} √λ_n B_n(t) e_n, t ≥ 0,

where λ_n ≥ 0, n ≥ 1, are non-negative real numbers and (e_n)_{n≥1} is a complete orthonormal basis in the Hilbert space (H, ||·||). Let Q be the symmetric non-negative operator with finite trace defined by Q e_n = λ_n e_n, so that Tr(Q) = Σ_{n≥1} λ_n < ∞; the process (W(t))_{t≥0} is then called a Q-Brownian motion. Let (K, ||·||_K) be a real separable Hilbert space. Let also L(K, H) be the space of all bounded linear operators from K into H. If K = H, we denote it by L(H).
Denote by H_0 = Q^{1/2}(H) and by L^0_2 = L_2(H_0, H) the space of all Hilbert-Schmidt operators from H_0 to H, equipped with the norm ||ψ||^2_{L^0_2} = Tr(ψQψ*). In the sequel, to prove Lemma 4 and Theorem 2, we need the following lemma, which is a particular case of Lemma 2.2 in [24] (see also [25]).
Lemma 2 Let ψ : R_+ → L^0_2 be a predictable process. (i) For p = 2, the Itô isometry holds:

E|| ∫_0^t ψ(s) dW(s) ||^2 = ∫_0^t E||ψ(s)||^2_{L^0_2} ds.

(ii) For p > 2, there exists some constant C_p > 0 such that the following particular case of the Burkholder-Davis-Gundy inequality holds:

E|| ∫_0^t ψ(s) dW(s) ||^p ≤ C_p E( ∫_0^t ||ψ(s)||^2_{L^0_2} ds )^{p/2}.

In the sequel, we will also frequently make use of the elementary inequality ||a + b||^p ≤ 2^{p−1}(||a||^p + ||b||^p).
Main results
In this section, we investigate the existence and the asymptotic stability of the p-th mean S-asymptotically ω-periodic solution to the already defined stochastic evolution equation

dX(t) = A(t)X(t) dt + f(t, X(t)) dt + g(t, X(t)) dW(t), t ≥ 0, (1)

where (A(t))_{t≥0} is a family of densely defined closed linear operators, f : R_+ × L^p(Ω, H) → L^p(Ω, H) and g : R_+ × L^p(Ω, H) → L^p(Ω, L^0_2) are jointly continuous functions satisfying some additional conditions, and (W(t))_{t≥0} is an F_t-adapted Q-Brownian motion with values in H.
Throughout the rest of this section, we require the following assumption on U(t, s):

(H1): A(t) generates an exponentially stable ω-periodic evolutionary process (U(t, s))_{t≥s} in L^p(Ω, H), that is, a two-parameter family of bounded linear operators such that U(t + ω, s + ω) = U(t, s) for all t ≥ s ≥ 0 and ||U(t, s)|| ≤ M e^{−δ(t−s)} for some constants M ≥ 1 and δ > 0.

If X is a solution of (1), then the process s → U(t, s)X(s) satisfies

d[U(t, s)X(s)] = U(t, s)f(s, X(s)) ds + U(t, s)g(s, X(s)) dW(s). (6)

Integrating (6) on [0, t], we obtain

X(t) = U(t, 0)X(0) + ∫_0^t U(t, s)f(s, X(s)) ds + ∫_0^t U(t, s)g(s, X(s)) dW(s).

Therefore, we define:

Definition 7 An (F_t)-adapted stochastic process (X(t))_{t≥0} is called a mild solution of (1) if it satisfies the stochastic integral equation

X(t) = U(t, 0)X(0) + ∫_0^t U(t, s)f(s, X(s)) ds + ∫_0^t U(t, s)g(s, X(s)) dW(s).
The existence of the p-th mean S-asymptotically ω-periodic solution
We require the following additional assumptions:

(H.2) The function f : R_+ × L^p(Ω, H) → L^p(Ω, H) is p-mean S-asymptotically ω-periodic in t ∈ R_+ uniformly in X ∈ K, where K ⊆ L^p(Ω, H) is a bounded set.
Moreover, the function f satisfies the Lipschitz condition, that is, there exists a constant L(f) > 0 such that E||f(t, X) − f(t, Y)||^p ≤ L(f) E||X − Y||^p for all t ≥ 0 and X, Y ∈ L^p(Ω, H).

(H.3) The function g : R_+ × L^p(Ω, H) → L^p(Ω, L^0_2) is p-mean S-asymptotically ω-periodic in t ∈ R_+ uniformly in X ∈ K, where K ⊆ L^p(Ω, H) is a bounded set. Moreover, the function g satisfies the Lipschitz condition, that is, there exists a constant L(g) > 0 such that E||g(t, X) − g(t, Y)||^p_{L^0_2} ≤ L(g) E||X − Y||^p for all t ≥ 0 and X, Y ∈ L^p(Ω, H).
Lemma 3 Assume that (H1) and (H.2) hold. If φ is a p-mean S-asymptotically ω-periodic process, then the process

(∧_1 φ)(t) = ∫_0^t U(t, s) f(s, φ(s)) ds, t ≥ 0,

is p-mean S-asymptotically ω-periodic.

Proof 2 Set F(t) = (∧_1 φ)(t). By Lemma 1, the process s → f(s, φ(s)) is p-mean S-asymptotically ω-periodic. It is easy to check that F is bounded and continuous. Now, letting p and q be conjugate exponents and using the Hölder inequality together with the exponential stability in (H1), one bounds E||F(t + ω) − F(t)||^p by a term J(t) which tends to zero as t → ∞.
Lemma 4 Assume that (H1) and (H.3) hold. If φ is a p-mean S-asymptotically ω-periodic process, then the process

(∧_2 φ)(t) = ∫_0^t U(t, s) g(s, φ(s)) dW(s), t ≥ 0,

is p-mean S-asymptotically ω-periodic.

Proof 3 We define h(s) = g(s, φ(s)). Since hypothesis (H.3) is satisfied, using Lemma 1 we deduce that the process h is p-mean S-asymptotically ω-periodic. Set F(t) = (∧_2 φ)(t). It is easy to check that F is bounded and continuous. We estimate E||F(t + ω) − F(t)||^p by a term EJ(t), where the constant C_p will be made precise in the next lines and where EJ(t) splits into two contributions EJ_1(t) and EJ_2(t). Note that for all t ≥ 0:
Estimation of EJ_1(t).
Assume that p > 2. Using the Hölder inequality between the conjugate exponents p/(p−2) and p/2, together with Lemma 2, part (ii), there exists a constant C_p such that EJ_1(t) → 0 as t → ∞. Assume that p = 2. By Lemma 2, part (i), we get the same conclusion.
Estimation of EJ_2(t). Assume that p = 2. By Lemma 2, part (i), and the Cauchy-Schwarz inequality, we have:
Note also that for t ≥ T the remaining terms can be made arbitrarily small, so that EJ(t) → 0 as t → ∞. This implies that F is p-mean S-asymptotically ω-periodic.
Theorem 2 Assume that (H1), (H.2) and (H.3) hold, and that the constant Θ defined in the proof below satisfies Θ < 1. Then the stochastic evolution equation (1) has a unique p-mean S-asymptotically ω-periodic solution.
Proof 4 We define the nonlinear operator Γ by the expression

(ΓX)(t) = U(t, 0)X(0) + ∫_0^t U(t, s)f(s, X(s)) ds + ∫_0^t U(t, s)g(s, X(s)) dW(s) = U(t, 0)X(0) + (∧_1 X)(t) + (∧_2 X)(t).

According to hypothesis (H1), we have ||U(t, 0)X(0)|| ≤ M e^{−δt}||X(0)||, so the first term tends to zero. According to Lemma 3 and Lemma 4, the operators ∧_1 and ∧_2 map the space of p-mean S-asymptotically ω-periodic processes into itself. Thus Γ maps the space of p-mean S-asymptotically ω-periodic processes into itself.
Case p > 2: by Lemma 2, part (ii), and the Hölder inequality, one obtains an estimate of the form E||(ΓX)(t) − (ΓY)(t)||^p ≤ Θ sup_{t≥0} E||X(t) − Y(t)||^p. Consequently, if Θ < 1, then Γ is a contraction mapping. One completes the proof by the Banach fixed-point principle.
Stability of the p-mean S-asymptotically ω-periodic solution
In the previous section we obtained, under some conditions, that the nonlinear stochastic evolution equation has a unique p-mean S-asymptotically ω-periodic solution. In this section, we will show that this unique p-mean S-asymptotically ω-periodic solution is asymptotically stable in the p-mean sense.
Definition 8 The unique p-mean S-asymptotically ω-periodic solution X*(t) of (1) is said to be stable in the p-mean sense if for any ε > 0 there exists δ > 0 such that E||X(t) − X*(t)||^p < ε for all t ≥ 0 whenever E||X(0) − X*(0)||^p < δ, where X(t) stands for a solution of (1) with initial value X(0).
Definition 9 The unique p-mean S-asymptotically ω-periodic solution X*(t) is said to be asymptotically stable in the p-mean sense if it is stable in the p-mean sense and lim_{t→∞} E||X(t) − X*(t)||^p = 0.

The following Gronwall inequality proves useful in our asymptotic stability analysis.
Lemma 5 Let u(t) be a non-negative continuous function for t ≥ 0, and let α, γ be positive constants. If

u(t) ≤ α e^{−γt} + β ∫_0^t e^{−γ(t−s)} u(s) ds, t ≥ 0,

for some constant β ≥ 0, then u(t) ≤ α e^{(β−γ)t} for all t ≥ 0.

Theorem 3 Assume that the conditions of Theorem 2 hold and that, in addition, the coefficients satisfy the smallness condition (i) whenever p > 2 and the smallness condition (ii) whenever p = 2. Then the p-mean S-asymptotically ω-periodic solution X*(t) of (1) is asymptotically stable in the p-mean sense.
Assume that p > 2. Using the Hölder inequality and applying Lemma 5, we obtain an estimate whose required smallness is equivalent to our condition (14). Therefore X* is asymptotically stable in the p-mean sense. | 2,994.6 | 2017-01-01T00:00:00.000 | [ "Mathematics", "Engineering" ] |
Transcriptomic and Mutational Analysis Discovering Distinct Molecular Characteristics Among Chinese Thymic Epithelial Tumor Patients
Introduction Thymic epithelial tumors (TETs) are malignancies arising from the epithelium of the thymic gland; they are rare but have a relatively favorable prognosis. TETs have different pathological subtypes, thymoma and thymic carcinoma, which show different clinical characteristics regarding prognosis, pathology, and molecular profiles. Although some studies have investigated the pathogenesis of TETs, more molecular data are still needed to further understand the underlying mechanisms among different TET subtypes and populations. Methods In this study, we performed targeted gene panel sequencing and whole transcriptome sequencing on the tumor tissues from 27 Chinese TET patients, including 24 thymomas (A, AB, and B subtypes) and 3 thymic squamous cell carcinomas. We analyzed the genetic variations and differentially expressed genes among multiple TET subtypes. Moreover, we compared our data with the published The Cancer Genome Atlas (TCGA) TET data on both the genetic and transcriptomic levels. Results Compared with the TCGA TET genomic data, we found that NF1 and ATM were the most frequently mutated genes in our cohort (each with a frequency of 11%, 3/27). These mutations were not mutually exclusive, since one B1 thymoma showed mutations of both genes. The GTF2I mutation was mainly enriched in subtype A and AB thymomas, consistent with previous reports. RNA-seq results unveiled that genes related to thymus development (FGF7, FGF10 and CLDN4) were highly expressed in certain TET subtypes, suggesting that the developmental process of the thymus might be linked to the tumorigenesis of these subtypes. We found high expression of CD274 (PD-L1) in B2 and B3 thymoma samples, and validated its expression using immunohistochemistry (IHC). Based on the expression profiles, we further established a machine learning model to predict the myasthenia gravis status of TET patients and achieved 90% sensitivity and 70.6% specificity in the testing cohort. Conclusion This study provides the first genomic and transcriptomic analysis of a Chinese TET cohort. The high expression of genes involved in thymus developmental processes suggests a potential association between the tumorigenesis of TETs and dysregulation of developmental pathways. The high expression of PD-L1 in B2 and B3 thymomas supports the potential application of immunotherapy for certain thymoma subtypes.
INTRODUCTION
Thymoma and thymic carcinoma (TC) are thymic epithelial tumors (TETs) with a low occurrence rate of roughly 1-5 cases per million population per year (1,2). Pathologically, thymomas can be stratified into A, AB, B1, B2 and B3 subtypes depending on the morphology and the proportion of cancer cells and lymphocytes (3). A and AB thymomas are generally considered to be of low malignancy, whereas B and TC subtypes are associated with moderate and high malignancy, respectively (1,2). Autoimmune disorders, such as myasthenia gravis (MG), are the most frequent syndromes co-occurring with thymomas (4). A few studies have focused on characterizing genomic variations and expression of certain genes in thymomas and TCs (5)(6)(7)(8)(9), which provide new insights to decipher mechanisms of tumorigenesis and develop novel therapeutic strategies for clinical practice. However, more molecular data are still needed to deeply understand the TET etiology among different subtypes and populations.
Mutations and aberrant expression levels of several genes have been identified in thymoma and TC. EGFR is highly expressed in some thymoma and TC samples, but only a few mutations have been identified within EGFR in thymoma samples (10). Mutations and overexpression of ERBB2, KRAS, and TP53 are found in TC samples (5). The high expression of KIT has also been confirmed in TCs (10,11). However, mutations have been rarely found in KIT in either TCs or thymomas (12). A leucine to histidine substitution (L383H, L404H) of GTF2I was recently identified to be one of the most frequent mutations in A and AB thymomas (13). In vitro experiments showed the mutations were associated with the tumorigenesis of thymomas (13). Radovich et al. also demonstrated the high prevalence of GTF2I L424H mutation in A and AB subtypes (6).
These studies provide a more thorough understanding of the mutational landscapes of TET, although mechanistic insight is still needed to understand the relationship between different subtypes and to provide new clues for development of therapeutic strategies.
The TCGA TET study represents the most systematic investigation of the molecular profiles of thymomas and TCs so far (6). In the study, 117 samples from various thymoma subtypes and TCs were analyzed by WES, RNA-seq, miRNA-seq, DNA methylation and RPPA arrays. Unsupervised clustering resulted in four clusters which were consistent with the pathologic classifications. The autoimmune disease MG was linked to somatic copy number variations and the intratumor overexpression of auto-antigen-related genes, such as CHRNA1, NEFM, and RYR3 (6). GTF2I-mutated samples had higher expression in several pathways related to cancer and cell signaling. Meanwhile, the TCGA TET study still leaves the interpretation of expression differences between subtypes as an open question. Importantly, it is worth noting that more than 80% of people in the TCGA cohort were Caucasian, and the Asian population was under-represented. More molecular studies on TET patients of other ethnicities are still needed.
In this pilot study, we present a comprehensive analysis of genomic and transcriptomic data from a Chinese TET cohort of 27 patients. To unveil the specific mechanisms involved in TET tumorigenesis in Asians, we made a thorough comparison between our cohort and the TCGA TET cohort on both the genomic variation and expression profiles of each subtype.
Patients and Sample Collection
Twenty-seven TET patients, including 24 thymomas and 3 thymic squamous cell carcinomas, were enrolled in this study ( Table 1 and Supplementary Tables 1, 2). All patients were Chinese and were treated in Peking Union Medical College Hospital. Clinical characteristic information regarding age, gender, race, histological classification, and clinical stage (Masaoka and TNM staging) was collected. Fresh frozen tumor tissue and white blood cells were collected from each patient during surgery or biopsy with informed consent forms and approval from the Ethics Committee of the Peking Union Medical College Hospital. Part of the fresh tissue sampled from multiple distinct regions of the resected tumor was made into one or several formalin-fixed, paraffin-embedded blocks in the pathology department of the hospital. The histologic subtypes for the 27 patients were examined following the 2015 World Health Organization (WHO) classification of tumors of the thymus (4th edition) (14). The CT images and immunohistochemistry (IHC) results are provided in the Supplementary Tables 1, 2. A typical hematoxylin and eosin (H&E) result for each histological subtype (A, AB, B1, B2, B3 and TC) was also obtained (Supplementary Figure 1). The remaining fresh tissue was frozen in liquid nitrogen and transported to the molecular lab for DNA panel and RNA-seq analysis.
Sample Processing
Before DNA and RNA extraction, a frozen tissue section of each sample was cut by a cryostat (Leica CM 1950, Leica Biosystems, Wetzlar, Germany), then fixed on a glass slide and stained with hematoxylin and eosin (H&E) to examine the tumor percentage of the tissue sample. The tumor cell proportions were confirmed to be above 20%. DNA and total RNA were extracted from fresh frozen tissue using the DNeasy Blood&Tissue Kit (Qiagen, Valencia, CA, USA) and RNeasy Mini Kit (Qiagen), respectively, following the manufacturer's protocols. Genomic DNA was extracted from peripheral blood using the QIAamp DNA Mini and Blood Mini Kit (Qiagen) per manufacturer's protocol. The Qubit 3.0 Fluorometer and Qubit dsDNA HS Assay kit (Life Technologies, Carlsbad, CA) were used to quantify DNA following the manufacturer's recommended protocol. The quality and quantity of extracted RNA were evaluated with NanoDrop 2000 (ThermoFisher, Pittsburgh, PA, USA) and Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA).
DNA Sequencing
NGS library preparation was performed on the DNA samples using the KAPA Hyper Prep kit (Kapa Biosystems, Wilmington, MA, USA) according to the manufacturer's instructions, and the libraries were hybridized with probes targeting the whole exons of 474 cancer-related genes using the SureSelectXT Target Enrichment System (Agilent Technologies, Santa Clara, CA, USA). The libraries were sequenced using the HiSeq-X10 platform (Illumina, San Diego, CA, USA).
FASTQ files of raw sequencing reads were generated using bcl2fastq Conversion Software (Illumina, Version: 2.17.1.14). Low-quality reads were filtered out and short reads were aligned to the hg38 genome using bwa-0.7.15 (15). PCR duplicates were removed using GATK Picard. Indel realignment and base recalibration were performed by GATK to improve indel detection sensitivity and correct bias in base quality scores (16). MuTect2 and GATK were used for single nucleotide variation (SNV) and indel calling, respectively. All variants were annotated with HGVS using snpEff-2.3.7 (17). Only coding region variants (SNV and INDEL) with mutation allele frequencies (MAF) ≥ 5% were retained for further analysis. The tumor mutation burden (TMB) was calculated by counting the non-synonymous somatic mutations in coding regions and normalized by the panel size as previously described (18).
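The TMB computation described above reduces to a simple count-and-normalize step. The following minimal Python sketch illustrates it; the variant field names and the panel-size constant are our own assumptions for illustration, not values from the pipeline:

```python
# Minimal sketch of the TMB calculation described above (hypothetical field names).
# TMB = non-synonymous somatic coding mutations / panel size in megabases.

PANEL_SIZE_MB = 1.8  # assumed effective coding size of the gene panel, in Mb

def tumor_mutation_burden(variants, panel_size_mb=PANEL_SIZE_MB):
    """Count non-synonymous coding variants with MAF >= 5% and normalize by panel size."""
    nonsyn = [
        v for v in variants
        if v["region"] == "coding"
        and v["effect"] != "synonymous"
        and v["maf"] >= 0.05
    ]
    return len(nonsyn) / panel_size_mb

# Example: 11 qualifying mutations on an assumed 1.8 Mb panel -> ~6.1 mut/Mb
example = [{"region": "coding", "effect": "missense", "maf": 0.12}] * 11
print(f"TMB = {tumor_mutation_burden(example):.2f} mut/Mb")
```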
RNA Sequencing
The RNA-seq library was constructed using NEBNext Ultra Directional RNA Library Prep Kit (New England Biolabs, Ipswich, MA, USA) according to the manufacturer's instructions, and qualified using Qubit 3.0 Fluorometer (Life Technologies, Carlsbad, CA, USA) and 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). Libraries were sequenced on HiSeq-X10 platform (Illumina, San Diego, CA, USA).
After FASTQ files were generated, low-quality reads were filtered out and short reads were mapped to the hg38 reference genome with the Ensembl release 93 genome annotation using STAR (19). Gene expression quantification was performed using RSEM (20) to obtain the fragments per kilobase of exon per million mapped reads (FPKM) value of each gene. The coefficient of variation (CV) was calculated for each gene, and the three thousand genes with the largest CV were selected for clustering. All samples were clustered by hierarchical clustering with the ward.D2 method. Cluster heat maps were generated using pheatmap. The sample and gene cluster numbers were determined using ConsensusCluster (21). Genes with a median FPKM larger than 4 were considered highly expressed in each cluster. The STRING web tools were used to perform pathway enrichment analysis of highly expressed genes (22). The differential gene expression analysis among clusters was performed using DESeq2 (23).
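As a rough illustration of the CV-based gene selection and Ward clustering just described, a minimal Python sketch follows; the matrix shape and placeholder data are our assumptions, and SciPy's "ward" linkage on Euclidean distances corresponds to R's ward.D2 as used by pheatmap:

```python
# Sketch of the expression clustering described above (placeholder FPKM data).
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder FPKM matrix: rows = genes, columns = samples.
fpkm = pd.DataFrame(np.random.lognormal(mean=1.0, sigma=1.0, size=(20000, 26)))

# Coefficient of variation per gene; keep the 3,000 most variable genes.
cv = fpkm.std(axis=1) / (fpkm.mean(axis=1) + 1e-9)
top = fpkm.loc[cv.nlargest(3000).index]

# Hierarchical clustering of samples, cut into 5 clusters as in the paper.
Z = linkage(top.T.values, method="ward")
sample_clusters = fcluster(Z, t=5, criterion="maxclust")
print(sample_clusters)
```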
Protein-Protein Interaction Network Analysis
PPI network analysis was performed to investigate the potential effects of somatic mutations on cellular functional networks. Mutated genes were first filtered by gene expression level based on the FPKM (fragments per kilobase of exon per million mapped reads) values (> 4). These highly expressed genes were then mapped to the PPI network from the STRING database, and further subjected to gene clustering using the Markov clustering (MCL) method provided on the STRING website.
Immunohistochemistry for PD-L1
Before PD-L1 immunohistochemistry (IHC), we first estimated the tumor cell percentage (TCP) of FFPE tissue using hematoxylin and eosin (H&E). FFPE samples with more than 100 tumor cells were further examined by IHC using PD-L1 IHC 22C3 pharmDx (Agilent Technologies, Santa Clara, CA, USA) according to the manufacturer's instructions. PD-L1 expression in TET tumor tissue was determined by the tumor proportion score (TPS). TPS is the percentage of viable tumor cells showing partial or complete membrane staining at any intensity (≥ 1+) relative to all viable tumor cells present in the sample, i.e., TPS = (number of stained viable tumor cells / total number of viable tumor cells) × 100%. Based on the TPS, the expression level of PD-L1 protein was defined as 'High expression' (TPS ≥ 50%), 'Positive' (1% ≤ TPS < 50%) or 'Negative' (TPS < 1%).
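Since the TPS cutoffs above fully determine the reported category, they can be written as a small decision rule; a minimal Python sketch (function and variable names are ours):

```python
# Sketch of the TPS-based PD-L1 categorization described above.
def pdl1_category(stained_tumor_cells: int, viable_tumor_cells: int) -> str:
    """TPS = stained viable tumor cells / all viable tumor cells * 100%."""
    if viable_tumor_cells < 100:
        return "Not evaluable"  # fewer than 100 tumor cells, per the protocol above
    tps = 100.0 * stained_tumor_cells / viable_tumor_cells
    if tps >= 50:
        return "High expression"
    if tps >= 1:
        return "Positive"
    return "Negative"

print(pdl1_category(180, 300))  # TPS = 60% -> "High expression"
```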
Prediction on Myasthenia Gravis Using SVM

LIB-SVM 3.25 was used for the construction of the support vector machine (SVM) model in this study (24). Gene expression profiles of the TCGA cohort and our cohort were normalized using the median absolute deviation (MAD) method (6). The prediction power of each gene in the top variable gene list was evaluated with a single-gene SVM model by the area under the curve (AUC) from receiver operating characteristic (ROC) analysis. Forward selection was then used to construct a gene set for prediction. SVM hyperparameters were selected using the top three gene sets. The top three gene-set models were applied to the validation cohort to evaluate the generalization error performance.
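A minimal Python sketch of the single-gene AUC screening plus forward selection described above follows, using scikit-learn's SVC rather than LIB-SVM, with placeholder data; the cross-validation scheme and candidate-pool size are our assumptions:

```python
# Sketch: rank genes by single-gene SVM AUC, then greedily grow a gene set.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 50))      # MAD-normalized expression, 104 training samples
y = rng.integers(0, 2, size=104)    # MG status (placeholder labels)

def auc_of(gene_idx):
    """Cross-validated AUC of an SVM restricted to the given genes."""
    scores = cross_val_predict(SVC(probability=True), X[:, list(gene_idx)], y,
                               cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

# Rank all genes by their single-gene AUC.
ranked = sorted(range(X.shape[1]), key=lambda g: auc_of((g,)), reverse=True)

# Forward selection over the top candidates: keep a gene only if it improves AUC.
selected, best = [], 0.0
for g in ranked[:10]:
    score = auc_of(selected + [g])
    if score > best:
        selected, best = selected + [g], score

print(selected, round(best, 3))
```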
Clinical Characteristics
Twenty-seven patients with clinical diagnoses of thymoma or TC (thymic squamous cell carcinoma) were enrolled in this study ( Table 1 and Supplementary Tables 1, 2). The median age of all the patients was 53, ranging between 25 and 70 years, and there were 12 male and 15 female patients. The histologic types for the 27 patients were determined by pathological examination following the 2015 World Health Organization (WHO) classification of tumors of the thymus (4th edition) (14), and were classified into A type (n = 2), AB type (n = 6), B1 type (n = 4), B2 type (n = 7), B3 type (n = 5), and TC (n = 3) (Supplementary Figure 1). All three TC samples were histologically diagnosed as thymic squamous cell carcinomas with CD5+ and CD117+. Most of the patients were in Masaoka stage I (n = 13), with the remaining patients distributed in stage II (n = 5) and stage III (n = 9). For TNM staging, there were also 13, 5, and 9 patients in stage I, II, and III, respectively.
Genome Variation of Subtypes in TETs
DNA samples from the paired tissue and white blood cells for each patient were sequenced using a 476-gene panel (Supplementary Table 3). In total, 27 tissue samples from thymoma or TC were sequenced. We used MAF ≥ 5% as the cutoff for variant filtering. Fifty-eight genomic variations from 47 genes were identified (Figure 1). The average TMB of the 27 samples was as low as 0.82 mut/MB, which was consistent with the low mutation burden discovered in the TCGA TET study (6). Ten samples possessed a TMB of 0 mut/MB. Four of the six samples with TMB > 1 mut/MB were found in the B subtypes. One A subtype and two B subtypes had an exceptionally high TMB > 5 mut/MB. Of the three TC samples, only the sample with the MSH6 mutation had a TMB of 0.95 mut/MB; the other two were 0 mut/MB. Consistently, the two carcinomas from Asian patients in the TCGA cohort also had a low mutational burden (0.13 and 0.66 mut/MB). Among the 47 genes carrying at least one mutation, NF1 and ATM were the most frequently mutated genes (11%, 3/27) in all samples. The predicted pathogenicity of the identified mutations was also retrieved from the NCBI ClinVar database (25). Three NF1 mutations were detected in one B1 (G1090*, nonsense mutation, unknown), one B2 (D1067V, missense mutation, likely benign) and one B3 (P1087L, missense mutation, unknown) subtype. Three ATM mutations were identified in one A (P424H, missense mutation, uncertain significance), one B1 (R493G, missense mutation, unknown) and one B3 (S169F, missense mutation, uncertain significance) subtype. These mutations were not mutually exclusive, since the B1 thymoma showed mutations of both genes. One AB subtype had a KRAS A59del mutation, and another AB subtype had an NRAS Q61K mutation. Only one TC sample was identified to have somatic mutations, which were MSH6 I927M (missense mutation, uncertain significance) and TERT R972S (missense mutation, unknown). The IHC result did not show loss of MSH6 protein expression and thus did not imply a microsatellite-instable status. MSH6 I927M is predicted to be of uncertain significance, as also supported by the finding that the respective TC sample did not exhibit the high TMB that would be expected in a microsatellite-instable tumor. The TC samples in our study had fewer somatic mutations than the thymoma samples, which might suggest that somatic mutation-related tumorigenesis differs between thymomas and TCs. We found significant enrichment of mutated genes in both the RAS (q=7E-8) and PI3K-Akt signaling pathways (q=2E-7), including AKT3, CSF1R, FGFR4, KRAS, NRAS, PIK3CA, and PIK3CB (Figure 1 and Supplementary Table 4); those mutated genes were mainly enriched in thymoma samples, suggesting that the RAS and PI3K-Akt signaling pathways may be involved in the tumor development of thymoma. In summary, the two most frequently mutated genes in our cohort, NF1 and ATM, could be somatic mutations specific to Chinese TET patients, as no NF1 or ATM mutations were found in the TCGA TET cohort.
Perturbation of Somatic Mutations on Protein-Protein Interactions
The functions of somatic mutations in our TET cohort were further explored for their potential perturbation on cellular PPI network. Twenty-eight highly expressed genes harboring 37 nonsynonymous mutations were used for perturbation analysis on the PPI network ( Figure S2A). Another twenty-two genes were also included in the network if they had at least one interaction with one of the 28 highly expressed genes. Clustering analysis of the 28 genes resulted in 6 clusters on the PPI network. The largest cluster had 9 genes, primarily belonging to the RAS signaling pathway. Other clusters were much smaller and showed no significant function enrichment. Furthermore, we performed the same analysis using the mutational landscape of each subtype separately. Different subtypes showed significantly different network topologies. The largest cluster of A and AB type ( Figures S2B, C) contained genes in the RAS and PI3K-Akt signaling pathways, but no overlap was observed between the two subtypes. The largest cluster of B subtypes contained four genes that did not show significant enrichment in any pathways ( Figure S2D). The results from the pathway analyses and PPI network clustering analysis were largely consistent, and suggested that the functional consequences of mutated genes in thymoma were closely related to key signaling pathways in cancer, which may contribute to the tumorigenesis of TET.
GTF2I Mutation in Thymoma Samples
The GTF2I L424H mutation was a newly identified recurrent genomic variation in the A and AB subtypes of thymomas (13). Since the gene was not covered by our gene panel, we analyzed the RNA-seq data for the mutation status of GTF2I in our cohort (Supplementary Table 5). Eleven samples showed the GTF2I L424H mutation, including all six patients with the AB subtype and one patient with the A subtype, which was consistent with the observation in the TCGA TET study that the GTF2I mutation mainly occurred in the A and AB subtypes (6). Besides that, we also detected the GTF2I L424H mutation in two B2 and two B3 samples, which were further examined by Sanger sequencing (Figure S3). The Sanger results were consistent with the RNA-seq data, which further confirmed the occurrence of the GTF2I mutation in thymoma samples.
Hierarchical Clustering on Expression Profiles of Thymomas and Thymic Carcinomas
RNA-seq data from 26 samples passed the quality control and were used for the hierarchical clustering analysis. Based on the distinct expression patterns, 5 clusters (C1-C5) were generated which were closely related to the pathogenic subtypes of thymomas and TCs (Figure 2A). We also performed an integrative clustering analysis using a combination of the RNA-seq data from TCGA and our cohort to validate the consistency of the clusters and histologic subtypes. The combined clustering results showed that all C1 members were clustered with either A or AB samples in the TCGA data set. The A subtype that clustered together with TC subtypes was confirmed to be an outlier in the integrative analysis of the TCGA data and our data ( Figure S4). It is worth noting that the samples from Asian patients of the two cohorts were evenly distributed among non-Asian clusters. No ethnicity-specific gene expression profiles were discovered in this analysis.
Expression Differences Among Thymic Epithelial Tumor Subtypes
We next performed cluster-based differential expression analysis to explore the gene functional differences among the different TET subtypes. Differential expression analysis was performed within the five clusters of the hierarchical clustering. Differentially expressed genes (DEGs) from each cluster were then analyzed for functional pathway enrichment ( Figure 2B). We observed that certain subtypes of TET (C3, C4 and C5) tended to be enriched in pathways related to the immune system and cell adhesion/migration, whereas the subtypes of C1 and C2 were enriched in pathways related to developmental processes and cellular components. In the sub-branches of C1 and C2, the enriched gene functions of the C2 cluster involved immune processes, whereas those of C1 were related to nervous system development. These results showed that the differentially expressed genes were more enriched in immune system and cell adhesion/migration pathways in the more malignant subtypes than in the less malignant subtypes.
We further sought to identify genes that were specifically expressed in each cluster and analyze their biological functions ( Figure 3A). The transcription factor EHF was highly expressed in C1 (Figures 3A and S5A). The high expression of EHF has been related to the progression of gastric cancer and to the elevation of the HER family proteins ERBB3 and ERBB4 (26). In our study, C1 samples had higher expression levels of ERBB3 and ERBB4 ( Figures S6A, B), which, together with EHF, suggested a possible implication of the C1 cluster in the development of epithelial malignancies (27)(28)(29). Two thymus development related genes, CLDN4 (Figures 3B and S5B) and TNFRSF11A ( Figure S7B), were also highly expressed in C1 samples. RANK (encoded by TNFRSF11A) is highly expressed in medullary progenitor cells and is also an important regulator of medullary formation, promoting the generation of AIRE+ mature medullary epithelial cells (30). Claudin-4, which is encoded by CLDN4, is a highly expressed gene marker for medullary epithelial stem cells (31). These findings may indicate that the tumorigenesis of the A subtype is associated with the deregulation of gene expression and the reconstitution of stem cell-like properties of medullary epithelial cells.
The E2F8 gene, which is involved in cell proliferation and cancer development, was found to be highly expressed in the C2, C4 and C5 clusters ( Figures 3C and S5C). The E2F8 gene was previously reported to be highly expressed in several different cancers compared with their normal tissues (32,33). To investigate whether the high expression of E2F8 was associated with epithelial cells and immature T cells, we calculated the correlation coefficient between E2F8 and TdT gene expression using the RNA-seq data for all the C2, C4 and C5 samples. The result showed a low R-squared value of 0.36, which indicated that E2F8 expression was not highly correlated with TdT expression from immature T cells, implying that the high expression of E2F8 possibly originates from both immature T cells and epithelial cells. Compared with the C1 cluster, we did not find any expression of genes related to the development of either medullary or cortex epithelial cells in the C2 cluster, which might indicate a difference in tumorigenesis for the AB subtype of thymoma.
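The R-squared check described above is a one-line correlation computation; a small Python sketch with placeholder FPKM values follows (the numbers are illustrative only, not the study's data):

```python
# Sketch of the E2F8-vs-TdT correlation check described above (placeholder FPKM values).
import numpy as np

e2f8 = np.array([5.2, 8.1, 6.7, 9.4, 7.3, 4.8, 6.1])    # hypothetical FPKM, C2/C4/C5 samples
tdt  = np.array([12.0, 30.5, 18.2, 22.1, 25.3, 9.8, 14.4])

r = np.corrcoef(e2f8, tdt)[0, 1]           # Pearson correlation coefficient
print(f"R-squared = {r**2:.2f}")           # a value near 0.36 would match the report
```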
C3 contains mostly TC subtypes. KIT was highly expressed in all TC subtypes in our cohort ( Figure S8A), consistent with previous research (12). We also found that the cell surface receptor PDGFRA was highly expressed in the TC subtypes ( Figure S8B). Although no statistical significance was observed for either KIT or PDGFRA due to the small number of TC samples, the trend was observed for the median expression level. Several FGF family genes, such as FGF1, FGF7 and FGF11, were also highly expressed in the TC group (FGF7, Figures 3D and S5D; FGF1, Figure S6C; FGF11, Figure S6D). In the cortex, mesenchymal cells produce FGF7, which promotes the proliferation of epithelial cells (34). The high expression of FGF7 in TC subtypes supports this mechanism of self-maintenance of epithelial proliferation in TC subtypes.
C4 consisted of five B2 or B3 subtypes. FGF10, a regulator of cortex epithelial cell proliferation, was highly expressed in C4 cluster ( Figures 3E and S5E), which may be related to the mechanism of self-maintenance of proliferation of B2 and B3 subtypes. Importantly, CD274 (PD-L1, Figures 3F and S5F) and PDCD1LG2 (PD-L2, Figure S7A), encoding PD-1 ligand, were highly expressed in this cluster. We further performed PD-L1 IHC and the results showed the patients with B2 or B3 thymomas in the cluster C4 had high PD-L1 expressions (Figure 4), which suggested patients with B2 and B3 thymomas could potentially benefit from immunotherapy (35)(36)(37). The biological roles for the highly expressed genes in C5 remain to be further investigated.
Expression profiling and pathway analysis showed the differences in highly expressed genes in each cluster. These data showed that the histologic subtypes and molecular clustering patterns were mostly consistent, which was also supported by the TCGA analysis. For the low-risk subtypes (A and AB), the molecular function was associated with tissue development and cell proliferation. The aberrant expression of transcription factors, such as EHF and E2F8, may represent the initial steps of gene dysregulation. For the more malignant types (B2 and B3, and TC), more cancer progression related genes were highly expressed. Several important genes associated with thymus development, such as CLDN4, FGF7 and FGF10, showed high expression in certain thymoma subtypes and TCs, suggesting their potential role in the development of TETs.
Myasthenia Gravis Related Gene Analysis on Thymic Epithelial Cancer
Thymoma is often associated with the autoimmune thymus disease MG. We found 149 genes with differential expression between the MG+ and MG− groups (Supplementary Table 6). Nonetheless, these DEGs did not contain genes involved in immunity or auto-antigens. Among the auto-antigen-related DEGs between MG+ and MG− reported in TCGA, NEFM was highly expressed in the MG+ group in our cohort, but there was no statistically significant difference in comparison with the MG− group ( Figure S9).
To investigate whether the MG status can be inferred from the gene expression profiles of TET patients, we constructed a machine learning model using SVM, in which the TCGA-TET samples (n = 104) were used as the training set to select the gene features, and 22 samples in our cohort were used as a validation set to verify the prediction performance of the model. We found that a three-gene model had optimal prediction performance between the TCGA cohort and our cohort ( Figure S10). The AUC of the training and validation set was comparable (0.840 vs 0.841), with 93.8% sensitivity and 64.0% specificity for the training set, and 90% sensitivity and 70.6% specificity in the testing cohort. The three genes used in this model were PPARGC1A, GABRA5, and NEFM, in which NEFM was related to MG according to the previous reports and the TCGA TET study (6).
DISCUSSION
The rare occurrence of TETs hinders clinical research and biomarker discovery. The recent integrative multi-omics study extended the knowledge of molecular signatures to the clinical characteristics of thymic epithelial tumors (6). In this pilot study, we performed genomic and transcriptomic analysis to explore the underlying molecular characteristics and mechanisms of a Chinese TET cohort. To the best of our knowledge, this is the first multi-omics study reported on a Chinese TET population.
The mutational landscape of our cohort showed several key differences from the TCGA dataset. First, the top mutated genes in our cohort did not match those in the TCGA cohort. Prevalent mutations in NF1 and ATM suggest a potential difference in the mutational landscape of the Chinese TET population. Though our sample size was small, the difference in the frequency of the GTF2I L424H mutation in the B subtypes still provides preliminary evidence that the Chinese TET population may have a different mutational landscape compared with the TCGA dataset. Another example of different mutational profiles between Chinese and Caucasian patients is the different prevalence of driver mutations in lung adenocarcinoma (LUAD) (38). While East Asian LUAD patients frequently carry EGFR L858R and exon 19 deletion mutations (40-55%) with a lower occurrence of KRAS mutations (8-12%), Caucasians have a lower incidence of EGFR mutations (15-25%) and a higher incidence of KRAS mutations (20-30%). The etiology of Chinese TET patients could thus possibly differ from that of the Caucasian population represented in TCGA. However, the expression profiles showed no obvious differences between the population groups, and the myasthenia gravis prediction model worked well for both cohorts. These results suggest that the mutational landscape, which may be involved in tumorigenesis, differs between populations, but with overall consistent expression patterns among them.
Consistent with the TCGA findings, the expression profiling clusters were largely consistent with the pathological subtypes. We further demonstrated that the less malignant TETs showed more expression of genes related to developmental processes and cellular components, whereas the more malignant TETs had more genes with altered expression associated with the immune system and cell adhesion/migration. It is worth pointing out that several genes related to thymus medullary and cortex development, such as CLDN4, FGF7, and FGF10, were linked to TETs. Such relationships unveil possible links between tumorigenesis and dysregulation of regulatory networks that can lead to an insightful understanding of the etiology of the disease. Targeting those genes might provide potential therapeutic clues pending further cellular mechanism and clinical studies.
The expression profiling analysis also identified high expression of PD-L1 for most of the B2 and B3 subtypes in our cohort, suggesting an immunotherapy opportunity for those patients. This is consistent with previous studies which reported high PD-L1 expression in TET patients with more malignant subtypes, such as type B2 and B3 thymomas, and thymic carcinomas (39,40). Moreover, we proposed here the first machine learning model to predict myasthenia gravis status for TET patients, which provides new perspectives for tackling the problem and could help identify high-risk myasthenia gravis patients in advance.
The limitation of this study is the relatively small number of cases in view of the heterogeneity of histological subtypes in TET patients. The size of the sample was largely restricted by the low incidence of TET and the sample availability from TET patients.
Another limitation is that we applied a targeted gene panel rather than whole-exome or whole-genome sequencing. Though the panel is relatively large and contains 476 cancer-related genes, the resulting mutation profiles may still be limited and could miss novel somatic mutations associated with Chinese TET patients. Future studies should expand the sample size to further validate both the molecular profiles and the MG prediction model in Chinese TET patients for clinical applications.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee of the Peking Union Medical College Hospital. The patients/participants provided their written informed consent to participate in this study. | 6,902 | 2021-09-08T00:00:00.000 | [ "Biology", "Medicine" ] |
Efficacy and Mechanism of Quercetin in the Treatment of Experimental Colitis Using Network Pharmacology Analysis
Quercetin, a flavonoid that is present in vegetables and fruits, has been found to have anti-inflammatory effects. However, the mechanism by which it inhibits colitis is uncertain. This study aimed to explore the effect and pharmacological mechanism of quercetin on dextran sodium sulfate (DSS)-induced ulcerative colitis (UC). Mice were given a 4% (w/v) DSS solution to drink for 7 days, followed by regular water for the following 5 days. Pharmacological mechanisms were predicted by network pharmacology. High-throughput 16S rDNA sequencing was performed to detect changes in the intestinal microbiota composition. Enzyme-linked immunosorbent assay and western blotting were performed to examine the anti-inflammatory role of quercetin in the colon. Quercetin attenuated DSS-induced body weight loss, colon length shortening, and pathological damage to the colon. Quercetin administration modulated the composition of the intestinal microbiota in DSS-induced mice and inhibited the growth of harmful bacteria. Network pharmacology revealed that quercetin target genes were enriched in inflammatory and neoplastic processes. Quercetin dramatically inhibited the expression of phosphorylated protein kinase B (AKT) and phosphatidylinositol 3-kinase (PI3K). Quercetin has a role in the treatment of UC, with pharmacological mechanisms that involve regulation of the intestinal microbiota, re-establishment of healthy microbiomes that favor mucosal healing, and the inhibition of PI3K/AKT signaling.
Introduction
Ulcerative colitis (UC) is an inflammatory bowel disease (IBD) with unclear pathogenesis. Its clinical symptoms include abdominal pain, diarrhea, and mucinous purulent bloody stool. Long disease duration and a high recurrence rate substantially impact the quality of life of patients [1]. Currently, salicylic acid and glucocorticoids are the most commonly used medications to treat UC, but they are associated with many adverse reactions, which restrict their long-term use [2]. Animal models of colitis frequently involve the administration of dextran sodium sulfate (DSS), which causes clinical and histological reactions resembling those seen in people with IBD [3][4][5][6].
Phosphatidylinositol 3-kinase (PI3K), a member of the intracellular lipid kinase family, can be divided into type I, II and III isoforms, of which type I plays a very important role in tumors [7]. Protein kinase B (PKB, also known as Akt), a serine/threonine kinase associated with protein kinase C, is a direct downstream target of PI3K [8,9]. The PI3K/AKT signaling pathway is critical for controlling the development and progression of inflammation [10], and it participates in the regulation and release of pro-inflammatory cytokines in the intestinal mucosa of UC patients [11]. Blocking the PI3K/Akt signaling pathway can reduce inflammation.

Park, MN, USA). Antibodies against phospho-Akt (Ser473) and Akt (C67E7) were obtained from Cell Signaling Technology (Danvers, MA, USA). The anti-occludin antibody was obtained from Proteintech Group (Wuhan, China). Transwell inserts (pore size of 0.4 µm) were purchased from Corning Inc. (Kennebunk, ME, USA).
Screening of Cellular Drug Delivery Concentrations
Mouse colon epithelial cells (MCECs) in the logarithmic growth stage were uniformly seeded in 96-well plates at a growth density of 30%. After 24 h of incubation, a blank group (no cells inoculated), a control group, and quercetin administration groups with different concentrations (500, 250, 125, 62.5, 31.25, and 15.625 µM) were set up, with 6 replicate wells in each group. After 24 h of drug administration, 10 µL of CCK-8 reagent was added to each well and incubation continued for 1 h. The absorbance (A) values of each group were measured at 450 nm with a microplate reader, and the cell survival rate was calculated as: cell survival rate (%) = (A_dosed − A_blank)/(A_control − A_blank) × 100%. The experiments described above were repeated three times. The effect of different concentrations of quercetin on the survival rate of MCECs varied greatly: 62.5, 31.25, and 15.625 µM quercetin had no significant effect on the survival rate of MCECs over 24 h. Therefore, 62.5 µM was chosen as the quercetin administration concentration.
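The survival-rate formula above translates directly into code; a minimal Python sketch follows (identifier names are ours):

```python
# Sketch of the CCK-8 survival-rate calculation described above.
def survival_rate(a_dosed: float, a_blank: float, a_control: float) -> float:
    """Cell survival rate (%) = (A_dosed - A_blank) / (A_control - A_blank) * 100."""
    return (a_dosed - a_blank) / (a_control - a_blank) * 100.0

print(f"{survival_rate(0.82, 0.10, 0.90):.1f}%")  # placeholder absorbances -> 90.0%
```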
Animals and Experimental Protocols
Jinan Pengyue Experimental Animal Breeding Co. Ltd. (Jinan, China) provided female BALB/c mice, which were 35-40 days old and 18-22 g in weight. An appropriate temperature and humidity were maintained in the rearing room, and a normal circadian rhythm was established to maintain the normal physiological activities of the mice. The animal care and protocols were approved by the Animal Care Committee of Jining Medical University. A total of 40 BALB/c mice were randomly allocated into four groups (n = 10/group): untreated control, DSS model, DSS + 5-ASA, and DSS + quercetin. Except for the control group, mice were given a 4% (w/v) DSS solution to drink for 7 days before being given regular water for the next 5 days [3]. From day 1 to day 12, mice in the two treatment groups were administered 5-ASA (40 mg/kg) or quercetin (100 mg/kg) daily by gavage, while mice in the blank control and DSS model groups were administered normal saline. All mice were sacrificed on day 13, and their organs and feces were collected. Colon tissues from mice were fixed in 4% paraformaldehyde for H&E staining. The remaining colon tissues were stored in liquid nitrogen for western blot analysis. Feces samples obtained from the intestinal sections were transferred to a sterile tube using sterile forceps, then quickly placed into liquid nitrogen and immediately stored at −80 °C for microbiota analysis.
Evaluation of Colitis
During the experiment, body weight changes, bloody stool, fecal character and mental status were observed daily [3]. The disease activity index (DAI) scoring criteria are shown in Table 1.
Macroscopic Assessment and Histological Analysis
Colons were removed, opened longitudinally, washed with phosphate-buffered saline, then fixed in 4% paraformaldehyde and embedded in paraffin. Embedded tissues were sliced into 4-µm-thick sections using a microtome, and then stained with H&E using a conventional protocol [3,32]. The histological change scoring criteria are shown in Table 2.

ELISA kits were used to assess the secretion of IL-1β, TNF-α and IL-6 from colon tissues and supernatants of mouse colon epithelial cell (MCEC) cultures following the manufacturer's recommendations, as previously described [33]. Each experiment was performed three times. Cytokine levels are shown in pg·mL−1.
16S rDNA Sequencing and Microbiota Analysis
Sequencing of 16S rDNA was performed using the following primer pair: forward (5′-AGRGTTTGATYNTGGCTCAG-3′) and reverse (5′-TASGGHTACCTTGTTASGACTT-3′). Third-generation microbial diversity analysis was based on the PacBio sequencing platform, and the marker gene was sequenced by single-molecule real-time sequencing (SMRT Cell). The species composition of each sample was revealed by filtering, clustering or denoising the circular consensus sequences, followed by species annotation and abundance analysis as previously described [34]. The following analyses were carried out: annotation and taxonomy analysis of species, significant difference analysis, and diversity analysis (alpha and beta diversity). The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/ (accessed on 23 September 2022), PRJNA881733.
Network Pharmacology
Targets of quercetin were gathered from the TCMSP [35], PharmMapper and Swiss Target Prediction databases, then overlapped, de-duplicated, and imported into the UniProt database [38] to standardize the target names and ultimately obtain drug-related targets. Similarly, the targets of UC found by searching the GeneCards database [39] (https://www.genecards.org/, accessed on 15 March 2022) and OMIM database [40] (http://www.omim.org, accessed on 15 March 2022) using the keyword "ulcerative colitis" were overlapped, de-duplicated, and imported into the UniProt database to standardize the target names and obtain the final UC disease targets.
Molecular Docking
We downloaded the 3D structure of quercetin in structure data file format from the Pubchem database (https://pubchem.ncbi.nlm.nih.gov, accessed on 25 May 2022), converted it to "mol2" format by Open Babel 3.1.1 software, used AutoDockTools to add hydrogen, set as ligand, determine the torque center and select the torsion key, and exported to PDBQT format. The target protein name was then entered into the Protein Data Bank (PDB) database (https://www.rcsb.org/, accessed on 25 May 2022), from which a human protein with one or more co-crystalline ligands and a low "resolution" value crystal structure was selected, saved in PDB format, dehydrogenated using AutoDockTools, set as a receptor and exported to PDBQT format. We adjusted the GridBox parameters by AutoDock 4.2.6 software [44] until the box wrapped all the receptor molecules, used the blind docking method to find the active site, exported the grid point parameter file (GPF), ran Autogrid 4, set the docking parameters and algorithm for docking, ran Autodock4, and checked the results. The docking results were visualized using PyMOL 2.4.0 software. Finally, to obtain the docking scores, the proteins and compounds were uploaded to DockThor [45] (https://www.dockthor.lncc.br/v2/, accessed on 25 May 2022) for online molecular docking.
Co-Culture and Scratch Assay
Mice induced with 4% DSS solution for 5 days were sacrificed on day 6. Peritoneal macrophages (Mϕs) were collected and cultured in Dulbecco's modified Eagle's medium. MCECs were plated in 6-well culture plates and incubated at 37 °C in a 5% CO2 incubator. Peritoneal macrophage cell suspensions were added to the upper chamber of a Transwell insert (pore size of 0.4 µm), transferred to the 6-well culture plates and co-cultured. The co-culture system was treated with quercetin (62.5 µM). Monolayers of the MCECs were scratched and observed at 0 and 24 h following treatment. The percentage of coverage was calculated.
Statistical Analysis
Statistical analyses were performed using GraphPad Prism software (GraphPad Software Inc., Avenida, CA, USA). All results are presented as means ± standard deviation from triplicate experiments. Group means were compared using Student's t-test (for normally distributed data). p values < 0.05 were considered statistically significant. Details of each type of statistical analysis are provided in the figure captions.
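For concreteness, a minimal Python sketch of the two-sample comparison described above, using SciPy's t-test on placeholder triplicate values (the numbers are illustrative only):

```python
# Sketch of the group comparison described above: Student's t-test on triplicates.
from scipy import stats

dss       = [48.2, 51.7, 46.9]  # placeholder cytokine levels (pg/mL), DSS group
quercetin = [30.1, 28.5, 33.0]  # placeholder, DSS + quercetin group

t, p = stats.ttest_ind(dss, quercetin)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> statistically significant
```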
Quercetin Attenuated DSS-Induced Colitis in Mice
To investigate the effects of quercetin on colitis, we added DSS to the drinking water of BALB/c mice for 7 days, followed by water treatment for 5 days. All animal procedures and assays are shown in Figure 1a. Mice in the DSS group showed substantial weight reduction compared with untreated control mice, which was improved after administration of quercetin (Figure 1b). The total DAI of DSS-induced mice was decreased by quercetin treatment, as evaluated by weight loss and loose and bloody stools in the DSS + quercetin group (Figure 1c). During modeling and administration, we observed the general condition of the mice with the naked eye and found that mice in the DSS group were in a poor, listless state, while mice in the administration groups were in relatively good condition (data not shown). We also found that quercetin reversed the DSS-induced colon shortening (p < 0.01) (Figure 1d,e). Histopathological staining with H&E revealed that DSS treatment caused severe mucosal necrosis with submucosal congestion and edema, along with significant inflammatory cell infiltration. As with the 5-ASA treatment, this colonic damage and inflammatory cell infiltration were significantly attenuated by quercetin treatment (Figure 1f,g), which was consistent with the amelioration of colon edema and shortening.
Quercetin Inhibited the Secretion of Inflammatory Factors in Colonic Tissues of DSS-Induced UC Mice
DSS + quercetin-treated mice showed significantly reduced secretion of IL-6, IL-1β and TNF-α in colon tissues compared with DSS-treated mice (Figure 2a-c). Western blotting results in DSS + quercetin mice showed that quercetin inhibited the expression of TNF-α, IL-6 and IL-1β protein in colonic tissues compared with DSS-treated mice (Figure 2d-g).
The Herb-Ingredient-Target Network of Quercetin
Using the TCMSP, PharmMapper and Swiss Target Prediction databases, we identified 247 action targets of quercetin, including AKT, IL-6, TNF-α and IL-1β. Construction of a quercetin-related target interaction network with Cytoscape 3.9.1 software is shown in Figure 3a. The order is based on the degree value of importance of each action target. The degree value of the target increases with darker color and greater area.
Using "ulcerative colitis" as the keyword, searches of the GeneCards and OMIM databases yielded 4825 and seven potential targets of UC, respectively. After removing the duplicate targets, the remaining potential targets were standardized for gene names in UniProt, from which a total of 2504 potential UC targets were obtained. Using Venny 2.1, the 247 quercetin action targets were mapped with the 2504 UC disease targets on a Venn diagram, which revealed 157 common drug-disease targets ( Figure 3b).
Next, the 157 common drug-disease targets were uploaded to the STRING database to build a PPI network, which included 157 nodes and 3157 edges. The topological properties of the intersection target proteins were analyzed with Cytoscape software (Figure 3c): the average degree of the network was about 40.2, the average betweenness was about 129.2, and the average closeness was about 0.00358. Combining the network diagram with the topological attribute table (degree, betweenness and closeness), we identified 33 nodes as important targets of quercetin in UC (Table S1).
The 157 common drug-disease targets were also introduced into the Metascape platform for Gene Ontology (GO) biological function analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis. Taking p < 0.01 as the main screening standard, 2107 GO biological function entries were retrieved, including 1854 biological processes (BPs), 81 cellular components (CCs) and 172 molecular functions (MFs). A total of 202 signal pathways were obtained by KEGG pathway enrichment analysis (Figure 3d,e, and Table S2).
Quercetin Molecular Docking with the Top 10 Core Target Proteins in the PPI Network
The affinity score in the molecular docking results reflects the level of binding between quercetin and the top ten core target proteins (Table S3). In general, the lower the affinity score, the more stable the binding conformation for ligand and receptor. Using AutoDock 4.2.6 software for molecular docking, we downloaded the results and related documents for quercetin and the following target proteins, taking the minimum binding energy as the reference index: AKT1 (PDB ID: 2uzs), TP53 (6ggb), TNF-α (2az5), IL-6 (1alu), VEGFA (5hhc), CASP3 (3deh), IL-1β (5r88), EGFR (2itv), MYC (6e16), and ESR1 (2qxs). The docking results indicated good binding ability between each of the ten target proteins and quercetin, with high potential biological activity (Figure 4a-j).
Fecal Microbiota Analysis
As the network pharmacological analysis revealed that quercetin had an antibacterial impact, we looked for changes in the microbiota composition. To determine the effect of quercetin on gut microbial composition, we performed 16S rDNA sequencing and assessed alpha and beta diversity. Alpha diversity was evaluated using abundance indices (Chao1 and ACE) and diversity indices (Shannon and Simpson). The Chao1 and ACE estimates represent bacterial richness and species abundance, whereas the Shannon and Simpson indices characterize the diversity of microorganisms. All sample libraries used in this study had coverage rates above 99%, indicating that the size of the library was adequate to include the vast majority of microorganisms. In all groups, the number of operational taxonomic units (OTUs) reached saturation and appropriately represented the majority of species, and curve analysis, including rarefaction curves and Shannon-Wiener curves, was used to confirm the adequacy of the sample size (Figure 5a,b). The results showed that the Chao1 and ACE indices in the quercetin group decreased compared with the model group, indicating that species richness was decreased after drug administration. The Shannon and Simpson indices were also decreased by quercetin, indicating that species diversity was decreased after drug administration (Figure 5c). Beta diversity, reflecting between-habitat diversity, was calculated by unweighted UniFrac. Principal Coordinates Analysis (PCoA) showed that the microflora of the groups clustered in different areas, indicating differences in the structure of the intestinal microflora between the groups. The results suggested that the intestinal flora of mice was disturbed after modeling, and that quercetin treatment could improve the intestinal flora disorder (Figure 5d). The non-parametric analysis of similarities (ANOSIM) detected that the inter-group differences in community composition and abundance among the three groups were more pronounced than those within groups (Figure 5e). In order to identify the bacterial groups with significant differences between the groups, linear discriminant analysis coupled with effect size measures (LEfSe) was performed. We found that, compared with the other groups, the abundance of bacteria including Clostridiales, Ruminococcaceae and Ruminococcus flavefaciens was higher in the control group (Figure 5f). Bacteria including Bacteroides acidifaciens, Muribaculaceae, Blautia and the genus Lachnospiraceae_NK4A136_group were markedly increased in the DSS-treated group, whereas Bacteroidaceae, Erysipelotrichia, Oscillospirales, and Ruminococcaceae were enriched in the quercetin-treated group (Figure 5f).
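The alpha-diversity indices named above have simple closed forms; the sketch below computes Shannon, Gini-Simpson and bias-corrected Chao1 from a toy OTU count vector (real input would be the per-sample OTU table from the 16S pipeline).

```python
import numpy as np

def shannon(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())   # Gini-Simpson form

def chao1(counts):
    s_obs = int((counts > 0).sum())      # observed species
    f1 = int((counts == 1).sum())        # singletons
    f2 = int((counts == 2).sum())        # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

otu = np.array([120, 85, 40, 7, 3, 1, 1, 2, 0, 15])  # toy OTU counts
print(shannon(otu), simpson(otu), chao1(otu))
```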
Quercetin Affected the PI3K-AKT Signaling Pathway in DSS-Induced Colitis
Western blot analysis showed that treatment with quercetin halted the increased expression of PI3K and dramatically reduced the phosphorylation of AKT induced by DSS (Figure 6a-c). These results indicated that quercetin inhibited the activation of the PI3K-AKT signaling pathway to exert its anti-colitis effect.
Quercetin Suppressed Inflammation and Contributed to Mucosal Healing
To replicate the inflammatory microenvironment, we created a co-culture system using MCECs and Mϕs. Peritoneal Mϕs were extracted from DSS group mice and co-cultured with MCECs for 24 h. The concentrations of IL-6, TNF-α and IL-1β in the cell supernatants of the MCECs, as detected by ELISA assay, further suggested that quercetin significantly reduced the secretion of these inflammatory factors (Figure 7a-c). In scratch experiments on the co-culture system, the capacity of MCECs to migrate was decreased in the presence of Mϕs from DSS mice, in contrast to the promotion of MCEC migration by Mϕs with quercetin-treated cells (Figure 7d-f). Western blot analysis showed that quercetin treatment significantly increased occludin expression, which was reduced in the DSS-Mϕs group compared with that in the DSS-Mϕs + Quercetin group (Figure 7g,h). These results indicated that quercetin attenuated the DSS-induced downregulation of occludin to restore intestinal barrier function.
The western blot analysis of extracts of the MCECs also showed that, in the DSS-Mϕs + Quercetin group, the overexpression of PI3K was halted and the phosphorylation of AKT induced by DSS was dramatically reduced (Figure 8a-d). These results further verified that quercetin inhibited the activation of the PI3K-AKT signaling pathway to exert an anti-colitis effect in vitro.
Discussion
In this study, we found that DSS-induced mice had serious inflammation and injury to colon tissues, with concomitant weight loss, bloody stools, loose stools and diarrhea, proving that the UC model was successful. All of these symptoms were improved by treatment with quercetin. Histopathological analysis indicated that DSS caused severe mucosal necrosis and submucosal edema, as well as significant inflammatory cell infiltration, all of which were significantly improved by quercetin, consistent with reduced inflammatory cell infiltration and secretion of inflammatory factors (IL-1β, TNF-α, IL-6).
Reportedly, the common flavonoid compound quercetin is the most effective scavenger of reactive oxygen species and prevents the synthesis of several pro-inflammatory substances, such as nitric oxide and TNF-α [47]. Prior to this study, the therapeutic effect of quercetin in UC had not yet been clarified, prompting us to perform a network pharmacological analysis of quercetin. A PPI topological analysis of the 157 intersection genes revealed 33 strongly associated proteins. The results of molecular docking also verified that quercetin has superior affinities for the target genes ESR1, IL-1β, TNF-α, IL-6, TP53, VEGFA, CASP3, EGFR, MYC and AKT1, and quercetin may exert powerful anticancer and anti-inflammatory effects via regulation of these targets.
The KEGG enrichment analysis of the quercetin-UC targets indicated several inflammation-related pathways: the IL-17, Toll-like receptor, PI3K/AKT, TNF, MAPK, NF-kappa B, NOD-like receptor, and JAK-STAT signaling pathways, T helper cell 17 differentiation, and inflammatory mediator regulation of transient receptor potential channels. The PI3K/AKT signaling pathway is recognized to be crucially important in inflammatory illnesses, especially IBD [10]. Quercetin has a role to play in the treatment of UC via inhibition of the PI3K/AKT signaling pathway, and its mechanism of action is shown in Figure 9. Upon activation of PI3K by multiple upstream cell surface receptors, type I PI3K catalyzes phosphorylation of phosphatidylinositol 4,5-bisphosphate at the D3 position of the inositol ring to generate the second messenger phosphatidylinositol 3,4,5-trisphosphate (PIP3), which in turn activates PKB/AKT [7,48]. AKT and the upstream 3-phosphatidylinositol-dependent protein kinase-1 (PDK1) interact with PIP3 through the pleckstrin-homology structural domain, and PDK1 activates AKT by phosphorylation at the internal Thr308 site [49][50][51]. Upon activation of the PI3K/AKT pathway, IκBα is phosphorylated by IκB kinases (IKK) and then degraded by ubiquitin-mediated proteolysis, which promotes the phosphorylation and nuclear translocation of NF-κB p65 and further activates the expression of downstream inflammatory mediators [52][53][54][55]. In healthy colon tissues, IL-1β, TNF-α and IL-6 are expressed at low levels, but they are activated and upregulated during inflammation. Our western blotting results showed that quercetin inhibited the PI3K/AKT signaling pathway to exert anti-inflammatory effects, which validated the KEGG enrichment results. Meanwhile, it effectively enhanced the expression of occludin and lowered the expression of IL-1β, TNF-α and IL-6. Our in vitro experiments further demonstrated that quercetin could promote mucosal healing and inhibit the secretion of inflammatory factors as well as the PI3K/AKT signaling pathway.

The composition of the human gut microbiota is linked to health and disease. Dysbiosis reflects a change in the balance of the makeup of the gut microbiota, and increases the risk of developing IBDs, including Crohn's disease and UC [56]. Bacteroidetes, the dominant flora in the colon, has attracted considerable attention [57]. It is reported that the relative abundance of Bacteroides in IBD patients is markedly lower than that in healthy participants [58,59]. A number of studies have shown that abundant species of the common Bacteroidetes, including Bacteroides vulgatus and other key bacteroidetes, are beneficial to the recovery of intestinal health in patients with IBD, showing therapeutic potential [60,61]. In addition, Erysipelotrichia, Erysipelotrichales, Erysipelotrichaceae, Oscillospirales and Ruminococcaceae can produce SCFAs to protect the gut from damage and reduce the degree of colonic inflammatory injury, and a decrease in their relative abundance can lead to gastrointestinal disorders [62][63][64][65][66][67]. Our results showed that the relative abundances of Bacteroidaceae, Erysipelotrichia, Oscillospirales, and Ruminococcaceae were significantly increased in the quercetin-treated group. Previous studies suggested that the relative abundance of Lachnospiraceae and Lachnospiraceae_NK4A136_group was significantly increased in colitis mice [68][69][70][71], which was consistent with our results.
Taken together, these results indicated that quercetin effectively prevented the development and progression of experimental colitis by altering the composition of the gut microbiota, increasing the abundance of beneficial bacteria and reducing the abundance of harmful bacteria.
Immune dysfunction of the macrophage-driven intestinal microenvironment plays a crucial role in the pathological mechanism of UC. Mϕs are highly plastic antigen-presenting cells that link the innate and adaptive immune systems, and macrophages can polarize into M1 and M2 types with different functions depending on the specific microenvironment. The M1 type is an inflammatory type that releases ILs to stimulate the inflammatory response, whereas the M2 type plays an anti-inflammatory role and can promote wound healing [72,73]. Animal life forms depend heavily on epithelial and/or endothelial barriers. An essential part of these barriers is the tight junction, of which occludin is a critical component [74]. To simulate the inflammatory environment surrounding epithelial cells, we established a co-culture system of MCECs and peritoneal Mϕs extracted from DSS group mice for scratch assays, co-culturing them for 24 h. The DSS-Mϕs + quercetin group was treated with quercetin (62.5 µM) on the basis of the DSS-Mϕs group, and a blank control group of MCECs was not co-cultured with Mϕs. While Mϕs in the DSS-Mϕs group inhibited the migration of MCECs, no such effect was seen in the DSS-Mϕs + quercetin group. It has been shown that quercetin can inhibit the inflammatory reaction and promote wound healing by promoting the transformation of macrophages from the M1 to the M2 phenotype [75]. Therefore, we speculate that quercetin may promote the transition of M1 Mϕs to M2, or impede the transition to M1 Mϕs, thereby reducing the level of proinflammatory ILs in DSS-induced colitis mice and promoting mucosal healing.
Conclusions
The unclear etiology and pathogenesis of UC have created urgency in the search for new and effective treatments. Our study substantiates a role for quercetin in the treatment of UC via inhibition of PI3K/AKT signaling, restoration of the intestinal barrier, and regulation of the gut microbiota, with no obvious tissue damage or side effects in mice. We propose that quercetin might be a feasible treatment option for UC and could be developed as a new therapeutic agent.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28010146/s1, Table S1: topological analysis results of the main target network; Table S2: the enrichment pathways corresponding to intersection genes; Table S3: the affinity scores of quercetin with the top ten targets.
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/ (accessed on 23 September 2022), PRJNA881733. | 7,102.2 | 2022-12-24T00:00:00.000 | [
"Biology",
"Medicine"
] |
Dissipativity and Passivity analysis of neural networks with mixed time-varying delays
This paper focuses on the problem of dissipativity and passivity analysis of NNs with mixed time-varying delays. By employing a Lyapunov functional approach, some sufficient conditions are derived to guarantee that the considered NNs are strictly (Q, S, R)-γ-dissipative and passive. Based on Lyapunov stability theory, a proper Lyapunov-Krasovskii functional (LKF) with some new terms is constructed, and its derivative is estimated using a newly developed single integral inequality that includes Jensen's inequality; the resulting conditions can be easily checked with the MATLAB LMI toolbox. Three numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed method.
Introduction
During the past decades, NNs have attracted many researchers' attention for their extensive successful applications in several areas, such as associative memory, static image processing, combinatorial optimization, signal processing and pattern recognition [1]-[3]. Moreover, all these applications deeply depend on the characteristic behavior of the dynamical system. It is well known that stability is a fundamental property of a dynamical system [4], because an unstable system has no practical application. In dynamical systems, time delay may cause instability, poor performance, and oscillation of the system behavior [12]. Therefore, many researchers have focused on finding the maximum admissible delay upper bounds for NNs with time delay. Hence, the study of NNs with time delay has gained considerable attention in the last few decades [1]-[15].
On the other hand, the character of NNs can be identified based on system performance. In recent years, significant efforts have been devoted to passivity analysis, which is well established in circuit theory. Passivity is one of the most efficient tools for studying the stability of NNs and nonlinear control models, especially for higher-order systems. The concept of passivity, as a part of the general theory of dissipative systems, has found many applications in different areas such as stability, complexity, chaos control and synchronization. Thus, it has become one of the most active areas of research and receives a great deal of attention in the research community [22]-[30]. For example, in [24], a delay-dependent passivity criterion was obtained by applying integral inequality methods to uncertain continuous-time delayed NNs. The delay-independent passivity of NNs was established in [30]. The structure of this paper is as follows. In Section 2, some necessary assumptions, definitions, and lemmas are given. The main results of this article are presented in Section 3. In Section 4, three numerical cases with simulation are verified. Finally, the general conclusions are reported in Section 5.
Notations: Throughout this paper, the superscripts $D^{-1}$ and $D^T$ stand for the inverse and transpose of a matrix $D$, respectively. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{n \times m}$ is the set of all $n \times m$ real matrices. For a real symmetric matrix $P_1$, $P_1 > 0$ ($P_1 \ge 0$, $P_1 < 0$) denotes that $P_1$ is positive definite (positive semi-definite, negative definite). The symmetric terms in a symmetric matrix are denoted by $*$. $I$ is an appropriately dimensioned identity matrix.
Problem formulation and preliminaries
In this paper, we consider the following NNs with mixed time-varying delays:

$$\dot{z}(t) = -Dz(t) + Ag(z(t)) + Bg(z(t-d(t))) + C\int_{t-\tau(t)}^{t} g(z(s))\,ds + \omega(t), \qquad (1)$$

where $z(t) = [z_1(t), \dots, z_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $g(z(t)) = [g(z_1(t)), \dots, g(z_n(t))]^T \in \mathbb{R}^n$ denotes the neuron activation function, and $\omega(t)$ is the noise input vector, belonging to $L_2[0, \infty)$. $D = \mathrm{diag}\{d_1, \dots, d_n\}$ is a positive diagonal matrix with $d_i > 0$, $i = 1, 2, \dots, n$, and $A, B, C \in \mathbb{R}^{n \times n}$ are the connection weight matrix, the discrete delayed connection weight matrix, and the distributed delayed connection weight matrix, respectively. The initial condition $\phi(t)$ is assumed to be continuously differentiable on $[-\delta, 0]$, where $\delta = \max\{d, \tau\}$.

Assumption 1: The delays $d(t)$ and $\tau(t)$ are continuous time-varying delay components satisfying $0 \le d(t) \le d$, $\dot{d}(t) \le \mu$ and $0 \le \tau(t) \le \tau$, where $d$, $\tau$, $\mu$ are real constants.

Assumption 2: Each activation function $g_i(\cdot)$ satisfies a sector condition with bounds $l_i^-$ and $l_i^+$ such that $l_i^- \le \dfrac{g_i(a_1) - g_i(a_2)}{a_1 - a_2} \le l_i^+$ for all $a_1, a_2 \in \mathbb{R}$, $a_1 \ne a_2$.

Definition 2.1 [12]: The system (1) is said to be strictly $(Q, S, R)$-$\gamma$-dissipative if, for some $\gamma > 0$, the inequality $E(t_p) \ge \gamma \int_0^{t_p} \omega^T(s)\,\omega(s)\,ds$ holds for all $t_p \ge 0$ under the zero initial condition, where $E(t_p)$ is the energy supply function defined in Remark 2.2.
Remark 2.2
For the dissipativity property, let us define an energy supply function as follows:

$$E(t_p) = \int_0^{t_p} \left( g^T(z(s))\,\mathcal{Q}\,g(z(s)) + 2g^T(z(s))\,\mathcal{S}\,\omega(s) + \omega^T(s)\,\mathcal{R}\,\omega(s) \right) ds,$$

where $\mathcal{Q}$, $\mathcal{S}$ and $\mathcal{R}$ are real matrices with $\mathcal{Q}$ and $\mathcal{R}$ symmetric.
Definition 2.3 [12]
The system (1) is called passive if there exists a scalar $\gamma \ge 0$ such that

$$2\int_0^{t_p} g^T(z(s))\,\omega(s)\,ds \ge -\gamma \int_0^{t_p} \omega^T(s)\,\omega(s)\,ds$$

for all $t_p \ge 0$ under the zero initial condition.
Lemma 2.4 [15]: For any constant matrix $U \in \mathbb{R}^{n \times n}$ with $U > 0$ and scalars $d_M > d_m > 0$ such that the following integration is well defined, Jensen's inequality gives

$$-(d_M - d_m)\int_{t-d_M}^{t-d_m} \dot{z}^T(s)\,U\,\dot{z}(s)\,ds \le -\left(\int_{t-d_M}^{t-d_m} \dot{z}(s)\,ds\right)^T U \left(\int_{t-d_M}^{t-d_m} \dot{z}(s)\,ds\right).$$

Lemma 2.5 [15]: For any constant matrix $X \in \mathbb{R}^{n \times n}$ and positive definite matrix $R \in \mathbb{R}^{n \times n}$ with $\begin{bmatrix} R & X \\ * & R \end{bmatrix} \ge 0$, such that the following integrations are well defined, the corresponding single integral inequality holds.
Main Results
In this section, we first present delay-dependent global asymptotic stability criteria for NNs with mixed delays. For simplicity of the matrix and vector representation, $e_i \in \mathbb{R}^{8n \times n}$ ($i = 1, 2, \dots, 8$) are defined as block entry matrices (for example, $e_4 = [0_n, 0_n, 0_n, I_n, 0_n, 0_n, 0_n, 0_n]^T$).
On the other hand, for any matrices $G_1$ and $G_2$ with appropriate dimensions, inequality (16) holds.
Furthermore, from (3), for the diagonal matrices $N_1$, $N_2$, we can achieve the following inequalities of the form $-2g^T(z(t))N_1 g(z(t)) + 2z^T(t)\cdots$ From (9)-(18), we can get $\dot{V}(t)$. If $\Omega < 0$, then $\dot{V}(t) < 0$. This means that the system (1) is globally asymptotically stable. The proof is completed. The other notations are defined as follows.

Theorem 3.2: For given scalars $d$, $\tau$ and $\mu$, system (1) with $\omega(t) = 0$ is strictly $(Q, S, R)$-$\gamma$-dissipative if there exist symmetric positive definite matrices $P_1, P_2, P_3, P_4, P_5$, positive diagonal matrices $\Lambda_1, \Lambda_2, N_1, N_2$, any matrices $H$, $G_1$, $G_2$ and a positive scalar $\gamma$ such that the following LMI holds, where $\Omega$ is defined the same as in Theorem 3.1.

Proof: To show dissipativity, we choose the same LKF and define the corresponding performance index for NNs (1). Following the proof of Theorem 3.1, we obtain the required bound. Under the zero initial condition, we can conclude that (4) holds, which means NNs (1) is strictly $(Q, S, R)$-$\gamma$-dissipative. This completes the proof.
Theorem 3.3: For given scalars $d$, $\tau$ and $\mu$, system (1) is passive if there exist symmetric positive definite matrices $P_1, P_2, P_3, P_4, P_5$, positive diagonal matrices $\Lambda_1, \Lambda_2, N_1, N_2$, any matrices $H$, $G_1$, $G_2$ and a positive scalar $\gamma$ such that the following LMI holds.

Proof: To show passivity, we choose the same LKF and define the corresponding performance index for NNs (1). From (8)-(19), we can get $\dot{V}(t)$. By integrating (25) with respect to $t$ over the time period from 0 to $t_p$, we know that, under zero initial conditions, if $\Omega < 0$, then $\dot{V}(t) < 0$. This means that the system (1) with discrete time-varying delay is passive in the sense of Definition 2.3. This completes the proof.
Numerical example
In this section, we give illustrative examples to demonstrate the reduced conservatism of our results and the effectiveness of the proposed method. Therefore, the concerned neural network with time-varying delays is globally asymptotically stable. Likewise, the concerned neural network with time-varying delays is passive.
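The LMI feasibility checks reported here were done with the MATLAB LMI toolbox; as a rough open-source analogue, the sketch below uses Python's cvxpy to test a toy Lyapunov LMI of the same flavor (a stand-in system, not the paper's NN model).

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for an LMI feasibility test: find P > 0 with
# A^T P + P A < 0 for a stable matrix A (Lyapunov inequality).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # strictness margin

constraints = [P - eps * np.eye(n) >> 0,
               A.T @ P + P @ A + eps * np.eye(n) << 0]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)   # "optimal" indicates feasibility
```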
Conclusion
This paper focuses on the problem of dissipativity and passivity analysis of NNs with mixed time-varying delays. By employing a Lyapunov functional approach, some sufficient conditions are derived to guarantee that the considered NNs are strictly (Q, S, R)-γ-dissipative and passive. Based on Lyapunov stability theory, a proper Lyapunov-Krasovskii functional (LKF) with some new terms is constructed, and its derivative is estimated using a newly developed single integral inequality that includes Jensen's inequality; the resulting conditions can be easily checked with the MATLAB LMI toolbox. Three numerical examples are finally provided to demonstrate the effectiveness and advantages of the proposed method.
"Mathematics",
"Engineering"
] |
Torque teno virus dynamics during the first year of life
Torque teno virus is a small, chronically persisting circular negative-sense ssDNA virus reaching near 100% prevalence. It is reported to be a marker of immune function in immunocompromised patients. The possibility of vertical maternal-fetal transmission remains controversial, but the incidence rate of TTV DNA in children increases with age. TTV dynamics are well studied in allogeneic hematopoietic stem cell transplantation as a predictor of post-transplant complications, but there are no viral proliferation kinetics data for other patient groups or healthy individuals. The aim of this study was to determine TTV dynamics during the first year of life of healthy infants. Ninety-eight clinically healthy breastfeeding infants (1-12 months of age) were analyzed by quantitative PCR for whole-blood TTV load with a test sensitivity of about 1000 viral copies per milliliter of blood (the total number of samples, including repeatedly tested infants, was 109). 67% of all analyzed samples were TTV-positive, demonstrating a significant positive correlation between age and TTV load (r = 0.81, p < 0.01). This is the first study to suggest that viral load increases during the first year of life, reaching a plateau after 6 months with strong proliferation over the first 60 days. Our data correlate well with TTV dynamics in patients following allogeneic hematopoietic stem cell transplantation.
Keywords: Torque teno virus, Transfusion-transmitted virus, TTV, Viral load dynamics, Neonatal period, Infants, TORCH infections

Background

Torque teno virus (TTV) is a small, chronically persisting circular negative-sense ssDNA virus reaching near 100% prevalence [1,2]. TTV is transmitted in all ways, including contact and respiratory routes [3]. It has been suggested that the presence of TTV can cause several diseases, such as acute respiratory diseases [4], liver diseases [5,6] and cancer [7], but these claims have not received convincing support. TTV is reported to be a marker of immune function in immunocompromised patients [8].
TTV dynamics are well studied in allogeneic hematopoietic stem cell transplantation as a predictor of post-transplant complications [21,22], but there are no viral proliferation kinetics data for other patient groups or healthy individuals.
The aim of this study was to determine TTV dynamics during the first year of life of healthy breastfeeding infants.
Patients and blood samples collection
This prospective single-center study included 98 clinically healthy breastfeeding infants (1-12 months of age; the numbers per month were 9, 6, 13, 8, 11, 14, 6, 9, 10, 6, 4 and 2, respectively). Ten infants were tested repeatedly (2 or 3 times), so the total number of samples was 109. The exclusion criteria were as follows: any infectious or genetic disease, any immunological deviations, or voluntary withdrawal from the study. Two separate aliquots of each capillary blood sample were collected into Microvette 200 K3EDTA (Sarstedt, Germany) between June 2017 and January 2018 at the Kulakov National Medical Research Center for Obstetrics, Gynecology, and Perinatology (Moscow, Russia). Samples were stored at − 20°C for 1-7 days until DNA extraction.
DNA extraction
DNA was extracted from 50 μl aliquots of thawed whole blood using a standard commercial silica-sorbent kit for DNA extraction from body fluids (Probe-GS DNA Extraction Kit, DNA-Technology, Russia). To prevent exogenous contamination, DNA isolation was performed in a separate DNA extraction room (Zone 1). To prevent cross-contamination of the samples, all procedures were carried out in the UV-equipped PCR-box using sterile disposable tubes and aerosol-resistant tips.
TTV quantification
qPCR was performed using the DTprime Real-Time PCR Cycler (DNA-Technology, Russia) as described in [1], with a test sensitivity of about 1000 viral copies per milliliter of blood. qPCR of a unique human genome fragment (in a separate PCR tube) was used as a DNA extraction control. To prevent PCR contamination by previous reactions or biological samples, the reactions were assembled using aerosol-resistant tips in a UV-equipped PCR-box in a separate PCR-preparation room (Zone 2). Also, no electrophoresis of TTV PCR products or other procedures that would require PCR-tube opening were performed in the building. All the negative controls and surface washings were negative.
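Absolute quantification of this kind is usually read off a standard curve; the sketch below, with illustrative Cq values and an assumed 50 µl-to-1 mL scaling factor (both hypothetical, not taken from the kit protocol), shows the conversion.

```python
import numpy as np

# Hedged sketch of absolute qPCR quantification: Cq values of serial
# standard dilutions define a line Cq = m*log10(copies) + b, from which
# unknown samples are interpolated. All numbers are illustrative only.
std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_cq     = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

m, b = np.polyfit(np.log10(std_copies), std_cq, 1)

def copies_per_ml(cq, extraction_factor=20.0):
    """Copies/mL; extraction_factor is an assumed scale-up from a 50 uL
    whole-blood aliquot to 1 mL (hypothetical, kit-dependent)."""
    return extraction_factor * 10 ** ((cq - b) / m)

print(f"{copies_per_ml(28.5):.0f} copies/mL")
```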
Data analysis
qPCR data were analyzed using the DTprime Real-Time PCR Cycler Software v.7.7 (DNA-Technology, Russia). Microsoft Office Excel 2016 (Microsoft Corporation, USA) and GraphPad Prism 6 (GraphPad Software, USA) were used for statistical analysis.
Results
TTV whole blood viral load was quantified for 98 infants at 1-12 months of age (see Fig. 1). Because of logistic difficulties, only 10 infants were tested repeatedly (2 or 3 times) (blue lines in Fig. 1), and only several mother-child pairs were examined for TTV load (data not shown). No breast milk samples were tested.
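The reported age-load correlation (r = 0.81, p < 0.01) is a standard computation; a minimal sketch with illustrative numbers (not the study's raw data):

```python
import numpy as np
from scipy import stats

# Illustrative data: age in days vs log10 whole-blood TTV load for
# positive samples (the study reports r = 0.81, p < 0.01 on 109 samples).
age_days = np.array([30, 45, 60, 90, 120, 180, 240, 300, 360])
log_load = np.array([3.1, 3.6, 4.2, 4.6, 4.9, 5.2, 5.3, 5.2, 5.4])

r, p = stats.pearsonr(age_days, log_load)
print(f"r = {r:.2f}, p = {p:.3g}")
```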
Of the 10 repeatedly tested infants, 3 did not show TTV in either test, 1 was unchanged, and the other 6 showed a higher TTV load in the second analysis (see Fig. 1).
Discussion
Despite more than twenty years of TTV research, the routes of mother-to-child transmission have not been fully elucidated and the possibility of transplacental TTV transmission remains controversial [9][10][11][12][13][14][15][16][17][18][19][20]. Some authors demonstrate the absence of TTV in cord blood or baby blood after delivery [10,13,20], but others show 13.8-48.1% of TTV-DNA-positive cord blood samples [9,11,14,18]. Such differences can be explained by the low sensitivity of PCR (for studies where the virus was not detected) or by PCR-product contamination (since cord blood TTV was usually detected by the contamination-prone nested-PCR technique). In any case, it can be argued that even if the virus passes the transplacental barrier, the cord blood viral concentration is very low and does not depend on the mother's TTV load [20].
The major site of TTV replication is lymphocytes [23][24][25], and the whole-blood TTV load is approximately 100 times higher than in plasma samples [20]. Consequently, we decided to measure TTV load in whole blood (instead of plasma or serum) to obtain a more sensitive approach.
Bagaglio et al. and Komatsu et al. demonstrated an increasing percentage of TTV DNA-positive infants during the first months of life by serum analysis [15,19]. Our whole-blood results correlate with the previous serum data, expectedly showing a greater percentage of positive samples (see Fig. 2).
Breast milk is often positive for TTV (23.3-67.3%) [13,14,26] and has been suggested to be one of the major routes of Torque teno virus transmission to babies. Thus, newborn TTV progression may be a consequence of the mother's breast milk TTV or of immune system changes during the neonatal period.
Conclusions
This is the first study to suggest that TTV load increases during the first months of healthy infant development, reaching a plateau after 3-6 months with strong proliferation over the first 60 days. The rapid increase in viral load correlates with previous data on TTV DNA prevalence. Also, neonatal TTV dynamics are similar to TTV proliferation in patients following allogeneic hematopoietic stem cell transplantation, demonstrating a possible similarity in the intracellular mechanisms of viral progression.
Abbreviations: PCR: polymerase chain reaction; qPCR: quantitative polymerase chain reaction; ssDNA: single-stranded deoxyribonucleic acid; TTV: Torque teno virus; UV: ultraviolet

Authors' contributions: EAT and ASR conducted the molecular studies and drafted the manuscript, AVD performed the sampling, and DVR and GTS designed the study and drafted the manuscript. All authors read and approved the final version of the manuscript.
Ethics approval and consent to participate
The study protocol was reviewed and approved by the Ethics Committee of the Pirogov Russian National Research Medical University (Protocol No.2017/23); the study was conducted in accordance with the Declaration of Helsinki. All participants (children's parents) provided written informed consent. | 1,717.8 | 2018-05-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Dynamic Analysis of Two-Rotor Three-Bearing System
A finite element model considering the shear effect and gyroscopic effect is developed to study the linear and nonlinear dynamic behavior of the two-rotor three-bearing system named the N+1 configuration with rub-impact in this paper. The influence of rotational speed, eccentric condition, and the stiffness of the coupling on the dynamic behavior of the N+1 configuration and the propagation of motion are discussed in detail. The linear rotordynamic analysis included an evaluation of rotor critical speeds and unbalance response. The results show that the critical speed and unbalance response of the rotors are sensitive to coupling stiffness in the N+1 configuration. In the nonlinear analysis, bifurcation diagrams, shaft-center trajectories, amplitude spectra, and Poincaré maps are used to analyze the dynamic behavior of the system. The results of the research show that these parameters have great effects on the dynamic behavior of the system. The response of the system with rub-impact shows abundant nonlinear phenomena. The system will exhibit synchronous periodic motion, multiperiodic motion, quasiperiodic motion, and chaotic motion patterns under rotor-stator rub interaction conditions. The dynamic response is more complicated for a flexible coupling and two mass eccentricities than for a system with a rigid coupling and one mass eccentricity.
Introduction
Ultra supercritical steam turbine technology can meet the requirements for high efficiency, reducing both fuel costs and emissions, as well as for a reliable supply of electric energy at low cost [1,2]. A typical ultra supercritical turbo-set comprises three separate turbine modules operating at different pressure and temperature levels. These modules are the high pressure turbine (HP), the intermediate pressure turbine (IP), and, depending on the cooling water conditions, one, two, or three low pressure turbines (LP). The generator is directly coupled to the last LP turbine. There are two configurations for the shafting supporting structure of an ultra supercritical steam turbine unit. In the first configuration, named the 2N configuration, the HP rotor, IP rotor, and LP rotors are each supported by two journal bearings. In the second configuration, the HP rotor is supported by two journal bearings, and the IP rotor and LP rotors are each supported by only one journal bearing. Rotors are coupled through mechanical couplings. The configuration where N rotors are supported by N+1 bearings is known as the N+1 configuration. The N+1 configuration has many advantages. For example, its compact shafting structure and small bearing span minimize the effect of foundation deformation on the bearing load and the alignment of the shafting. As a consequence, an ultra supercritical unit with the N+1 configuration is easy to install and maintain, and the investment in an ultra supercritical power plant can be significantly reduced. Due to the specific supporting structure of the N+1 configuration, however, rotor vibration is more strongly affected by the adjacent rotor in an ultra supercritical turbo-set with the N+1 configuration than with the 2N configuration. It is very difficult to analyze and diagnose vibration faults due to the decreased number of vibration monitoring locations in an ultra supercritical turbo-set with the N+1 configuration. In addition, unstable rotor vibration is easily caused in the N+1 configuration. Research on rotor vibration of ultra supercritical turbo-sets with the N+1 configuration is still at a preliminary and exploratory stage at present.
Most large rotating machinery consists of multiple rotors, which are coupled through mechanical couplings. There are many types of industrial coupling, some of which are rigid, gear, and flexible types. The coupling's function is to transmit torque from the driver to the driven rotor. Couplings play an important role in turbomachinery. Pennacchi et al. [3] propose a complete and original method to simulate the behavior of a real shaft line with rigid couplings and analyze the nonlinear character of the system. A rigid mechanical coupling model and a flexible coupling model are built to examine the lateral and torsional responses of two rotating shafts, respectively, with theoretical and numerical analysis in [4,5]. Redmond [6] presented a model which enables dynamic analysis of flexibly coupled misaligned shafts and investigated the influence of coupling-stiffness anisotropy, bearing nonlinearity, mass unbalance, and static torque-transmission effects.
In turbomachinery, such as ultra supercritical turbo-sets with the N+1 configuration, the requirement of high efficiency has led to reduced clearances between rotating and stator components. With tighter clearances, rotor-stator rub interaction faults are prominent in high performance turbomachinery operations. Physical contact of the rotor with a stationary element of the rotating machine and the subsequent rubbing at the contact area, which involves several major physical phenomena such as sliding friction, impact, and modification of the system stiffness, contribute to potentially serious malfunctions in rotating machinery that may lead to machine failure. The malfunction may be mild under light rub, or complete destruction of the machine may occur under severe rub [7]. Light partial rubs and full annular rubs cause major changes in the rotor's normally synchronous vibrations. Heavy rubs may cause significant impacts as well as subsynchronous and supersynchronous vibrations of the shaft. The system vibration often contains some very complicated phenomena, including not only periodic motion but also chaotic motion.
The nonlinear dynamic behavior and stability of rotor-bearing systems subjected to rub-impact have been studied by scholars and engineers for some time. Different types of fault symptoms have been reported by Muszynska [8] and Ahmad [9], who give detailed literature reviews of this complex physical phenomenon. The majority of published works have focused on the global dynamic performance of rotor-bearing systems using different mathematical models. Khanlo et al. [10] studied the chaotic vibration of a rotating flexible continuous shaft-disk system with rub-impact, including the Coriolis and centrifugal effects. The results show that the effect of the centrifugal force is greater than that of the Coriolis force on the occurrence of rub-impact, and that rub-impact occurs at lower speed ratios under the influence of the Coriolis and centrifugal forces. Torkhani et al. [11] presented a model suitable for rotor systems under light, medium, and heavy partial rubs, and the corresponding predictions are compared with experimental results from a large test rig supported by elliptical bearings. Chu and Lu [12] investigated experimentally the nonlinear dynamic vibration of several different rotor-bearing systems, including one and two rotors with single and multiple disks. The experimental results show that the system motion generally contains multiple harmonic components and fractional harmonic components. Shen et al. [13] studied the dynamic behavior of a rub-impact rotor-bearing system with initial permanent bow.
Based on the short bearing theory, the nonlinear oil film forces from the journal bearing are obtained. The influence of rotating speeds, initial permanent bow lengths, and phase angles on the response of the system is investigated. Wang et al. [14] investigated the nonlinear dynamic characteristics of a single-disk rotor system supported by oil film journal bearings using a lumped-mass model. The effects of rotating speed and damping ratio on the dynamic behavior of the system are discussed. The results reveal a complex dynamic behavior comprising periodic, multiperiodic, chaotic, and quasiperiodic responses. Zhang and Wang [15] investigated the effects of the phase difference between disks on the rotor dynamic response, highlighting field balancing of the rotor as a possible solution for rub-impact mitigation. Roques et al. [16] introduced a rotor-stator model of a turbo generator in order to investigate speed transients with rotor-diaphragm rubbing caused by an accidental blade-off imbalance. The highly nonlinear equations due to contact conditions are solved through an explicit prediction-correction time-marching procedure combined with the Lagrange multiplier approach, dealing with a node-to-line contact strategy. The results highlight the significant role of the friction coefficient together with the diaphragm modeling, from rigid to fully flexible, in the interaction phenomenon. Based on finite element theory with contact analysis, Behzad et al. [17] developed an algorithm to investigate rotor-to-stator partial rubbing vibration and obtained the responses of the partial rubbing for different rotational speeds. In addition, the sensitivity to the initial clearance, the stator stiffness, the damping parameter, and the coefficient of friction is analyzed. Ma et al. [18] investigated numerically and experimentally the fault characteristics of a two-disk rotor system with a point-point contact rubbing model between a disc and an elastic rod.
From the investigations cited above, it is known that much work has been done on single-span rotor systems with few disks and rub-impact. To the authors' best knowledge, few publications have considered the nonlinear dynamic behavior of a rotor-bearing system with the N+1 configuration subjected to rub-impact. In this paper, the dynamic model of a two-rotor three-bearing system with rub-impact is established using the finite element method. The effect of the bearings is expressed in the form of stiffness and damping coefficients which vary with rotating speed. Attention is paid to the influence of rotational speed, eccentric condition, and the stiffness of the coupling on the dynamic behavior of the two-rotor three-bearing system and the propagation of motion. Bifurcation diagrams, shaft-centre trajectories, amplitude spectra, and Poincaré maps are used to analyze the dynamic behavior of the system.
Physical Model and Governing Equations
On the right-hand side of the equation, the first term is the sliding velocity term that gives rise to the bearing stiffness coefficients, and the second is the squeeze-film velocity term that gives rise to the bearing damping coefficients. Here $p = p(\theta, z)$ and $h = h(\theta, z)$, with $0 \le \theta \le 2\pi$ and $-L/2 \le z \le L/2$, are the film pressure distribution and the film thickness distribution, respectively, $\mu$ denotes viscosity, and $L$ is the hydrodynamically active axial length of the journal bearing. Here only the pressure $p(\theta, z)$ is unknown and all other parameters are specified. By solving this equation and integrating $p(\theta, z)$ over the journal cylindrical surface, the bearing forces in the $x$ and $y$ directions can be obtained, as given in (2). The corresponding eight stiffness and damping coefficients are compactly expressed using subscript notation, as given in (3), where the subscripts denote coordinate directions. In most previous papers, (1) was solved by neglecting either the axial pressure flow term ("long bearing" solution) or the circumferential pressure flow term ("short bearing" solution). With either approximation, the RLE is reduced to an ordinary differential equation. These two approximate solutions provide an upper bound and a lower bound on the bearing forces, respectively. In this paper, the bearing stiffness is based on a full 2D numerical solution algorithm. The rotor system is supported by elliptical bearings. The bearing parameters used in this paper are listed in Table 1, and the eight coefficients of this type of bearing were evaluated and plotted versus operating speed in Figure 1.
Rub-Impact Force.
The rub-impact is assumed to take place at the large disk. There is an initial clearance $\delta$ between the disk and the stator. The rubbing model is shown in Figure 2. Without considering the thermal effect of friction, the relationship between the radial rotor displacement and the contact force at disk 2 can be defined as in (4), where $r = \sqrt{x^2 + y^2}$ is the radial displacement, $x$ and $y$ are the displacement components of the geometric centre of the disk in the $x$ and $y$ directions, $f$ is the friction coefficient between the disk and the stator, and $k_s$ is the radial stiffness of the stator. The components of the rub-impact forces in the $x$ and $y$ directions can be expressed as in (5).
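A common piecewise form of this rub model is easy to state in code; the sketch below follows the description above with the clearance and stator stiffness values quoted later in the text (units as printed there), and assumes forward whirl for the sign of the friction term.

```python
import numpy as np

def rub_impact_force(x, y, delta=0.06e-3, ks=4.57, f=0.1):
    """Piecewise rub force at disk 2: zero until the radial displacement
    r = sqrt(x^2 + y^2) exceeds the clearance delta, then an elastic normal
    force ks*(r - delta) plus Coulomb friction f times that normal force.
    The friction direction assumes forward whirl (counterclockwise spin)."""
    r = np.hypot(x, y)
    if r < delta:
        return 0.0, 0.0
    fn = ks * (r - delta)            # normal (radial) contact force
    ft = f * fn                      # tangential friction force
    fx = -(fn * x - ft * y) / r      # projection onto the x direction
    fy = -(fn * y + ft * x) / r      # projection onto the y direction
    return fx, fy

print(rub_impact_force(5e-5, 5e-5))
```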
Dynamic Governing Equation.
The finite element method is used to establish the dynamic governing equation of the rotor system based on Timoshenko beam theory, including the shear effect, gyroscopic effect, and transverse torsion. A two-rotor three-bearing system is considered. The rotor-bearing system, which includes two shafts and four disks, is supported by three journal bearings. The left rotor, including two disks, is supported by two elliptical journal bearings, and the right rotor, including two disks, is supported by one elliptical journal bearing. The two shafts are coupled through a mechanical coupling. Very often the coupling is modeled as an elastic component with isotropic translational stiffness and rotational stiffness between two stations. The system is discretized using beam elements as shown in Figure 3. The left rotor is 970 mm in length and the right rotor is 835 mm. The two-rotor diameter is 50 mm, with the two big disks having a diameter of 270 mm and a length of 55 mm and the two small disks having a diameter of 123 mm and a length of 70 mm. The length is broken down into 41 Timoshenko beam elements in the finite element model. Bearings are located at node 6, node 20, and node 39, respectively. Every node has 4 degrees of freedom, two translational and two rotational: $x$ and $y$ are the translational freedoms in the $x$ and $y$ directions, and $\theta_x$ and $\theta_y$ denote the rotation angles of the cross section around the $x$ and $y$ axes, respectively. By integrating all the local matrices of shaft elements, disks, bearings, and unbalances, the dynamic behavior of the whole rotor system is described by the following second-order differential equation:

$$M\ddot{q}(t) + (C + \omega G)\dot{q}(t) + Kq(t) = F(t),$$

where $M$, $K$, and $C$ are the mass matrix, stiffness matrix, and damping matrix due to structure and bearings, respectively, $G$ is the gyroscopic matrix of the rotor system with the angular velocity $\omega$, and $F$ is the load vector, which is composed of three parts, that is, the unbalance forces, the rotor-stator contact forces, and the gravity force. The damping matrix includes the Rayleigh damping matrix and the gyroscopic matrix. The Newmark method is used to solve the transient differential equations. The chosen time step is $2\pi/(300\,\omega)$ s (300 steps per revolution), so the convergence of the quantities of interest with respect to the time step is achieved.
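The Newmark scheme used for the transient solution can be sketched compactly; below is a generic average-acceleration implementation for M q'' + C q' + K q = F(t), verified on a toy one-degree-of-freedom oscillator rather than the 41-element rotor model.

```python
import numpy as np

def newmark(M, C, K, F, q0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark scheme for M q'' + C q' + K q = F(t),
    the same family of integrator the text applies to the assembled FE model."""
    q, v = q0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ q)
    S = M + gamma * dt * C + beta * dt**2 * K       # effective matrix
    out = [q.copy()]
    for i in range(1, n_steps + 1):
        t = i * dt
        q_pred = q + dt * v + (0.5 - beta) * dt**2 * a
        v_pred = v + (1.0 - gamma) * dt * a
        a = np.linalg.solve(S, F(t) - C @ v_pred - K @ q_pred)
        q = q_pred + beta * dt**2 * a
        v = v_pred + gamma * dt * a
        out.append(q.copy())
    return np.array(out)

# Toy 1-DOF check: unforced oscillator, unit mass/stiffness, light damping.
M = np.eye(1); C = 0.02 * np.eye(1); K = np.eye(1)
resp = newmark(M, C, K, lambda t: np.zeros(1), np.array([1.0]),
               np.zeros(1), dt=0.01, n_steps=1000)
print(resp[-1])
```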
Numerical Calculation and Discussion
In this paper, the main objective is to evaluate the effects of rotational speed, eccentric condition, and the stiffness of the coupling on the dynamic behavior of the rotor-bearing system with the N+1 configuration and the propagation of motion as related to a rub-impact condition. The radial stiffness of the stator is 4.57 N/m. The clearance and friction coefficient between the disk and the stator are 0.06 mm and 0.1, respectively. The speed range considered was from 300 to 15,000 rpm.
Critical Speed and Mode.
To evaluate the effects of the coupling parameters on the dynamic response of the rotor-bearing system with the N+1 configuration, the critical speeds and modes for different values of coupling stiffness were analyzed with the rotor dynamic model. To begin the analysis, a critical speed map of the system was developed, as shown in Figure 4. The first, third, fourth, and fifth critical speeds are sensitive to the coupling parameter when the value of the coupling stiffness is less than 1.07 N/m and are elevated with increasing coupling stiffness. When the value of the coupling stiffness is greater than 1.07 N/m, the change in the first, third, fourth, and fifth critical speeds is not obvious, and the damped critical speeds tend to be stable. With the change of coupling stiffness, however, the value of the second critical speed of the system remains almost constant. The second critical speed of the system is not sensitive to the coupling property because it is the first critical speed of the left rotor supported by two bearings.
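A critical speed map of this kind comes from the damped eigenvalues of the first-order (state-space) form of the equations of motion evaluated over spin speed; the sketch below illustrates the computation on hypothetical 2-DOF matrices, not the actual rotor model.

```python
import numpy as np

def damped_eigs(M, K, C, G, omega):
    """Eigenvalues of the rotor state-space form at spin speed omega; the
    imaginary parts give damped whirl frequencies, so their crossings with
    the synchronous line locate critical speeds."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C + omega * G)]])
    return np.linalg.eigvals(A)

# Illustrative 2-DOF stand-ins: the off-diagonal K term mimics a coupling.
M = np.diag([1.0, 1.0])
K = np.array([[2.0e4, -1.0e4], [-1.0e4, 2.0e4]])
C = 5.0 * np.eye(2)
G = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric gyroscopic term

for w in (100.0, 500.0):
    print(w, np.sort(np.abs(damped_eigs(M, K, C, G, w).imag))[-2:])
```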
Figure 5 shows the mode shapes for the first five synchronous critical speeds of the system when the value of the coupling stiffness is 1.06 N/m and 1.09 N/m, respectively. The first mode shape of the system corresponds to the first critical speed of the right rotor. The second, third, fourth, and fifth mode shapes of the system correspond to the first critical speed of the left rotor, the second critical speed of the right rotor, the second critical speed of the left rotor, and the third critical speed of the right rotor, respectively.
Effect of Rotor Unbalance on the Dynamic Response.
The rotors always have some amount of residual unbalance, no matter how well they are balanced. These residual unbalance forces are discrete and located at different planes with different amplitudes. Eighteen cases were analyzed with the rotor dynamic model of the rotor-bearing system with the N+1 configuration (Table 2). The parameters chosen for evaluation include the location of the unbalance, the amplitude of the unbalance, and the stiffness of the coupling. Unbalance forces with different amplitudes were applied at each of the two big disk locations, and at both big disk locations, to excite the rotor in a manner that would produce a rub-impact condition. The linear rotor dynamic analysis was then performed for the 18 cases in order to evaluate the effect of the coupling characteristics on the dynamic response of the system.
As shown in Figures 6-8, the radial displacements of the two big disks for a set of cases are considered. The displacements rapidly increase around 4500-5500 rpm as the rotor passes through the first critical speed. The displacement of the big disk with unbalance is larger than that of the other big disk without unbalance. The greater the amplitude of the unbalance, the greater the displacement of the disk. The unbalance response of the two disks, shown in Figures 9-11, was sensitive to the coupling stiffness when the amplitude of the unbalance was constant. For the left big disk with unbalance, the unbalance response of the left big disk was not sensitive to the coupling stiffness because the left rotor is supported by two bearings. When the right big disk is unbalanced, the response of the right big disk is very sensitive to the coupling property. The first critical speed excited by the unbalance force increases with increasing coupling stiffness for the right big disk.
Effect of Rotational Speed on the Dynamic Response.
Firstly, the effect of rotational speed on the dynamic response of the system is discussed in this section, because the speed is one of the most important parameters of a rotor-bearing system. Keeping the other parameters unchanged and taking the rotational speed as the control parameter, the nonlinear dynamic behavior of the rotor supported by three elliptical bearings at different rotating speeds was investigated. Two cases, one in which the left rotor has a mass eccentricity and the other in which both rotors have mass eccentricities, are considered. The value of the mass eccentricity is 0.001 kg⋅m. Figure 12 shows bifurcation diagrams of the displacement of the right big disk at different rotating speeds when the radial stiffness of the coupling is 1.05, 1.07, and 1.09 N/m for two mass eccentricities. For one mass eccentricity, the bifurcation diagrams of the displacement of the right big disk are plotted in Figure 13 for the same coupling stiffness values. The maximum radial displacements of the right big disk at different rotating speeds and radial stiffnesses of the coupling for the two cases are shown in Figure 14.
In nonlinear dynamics, the bifurcation diagram is often used to provide a summary of the essential dynamics of the system under variation of a governing parameter. As shown in Figures 12 and 13, the most complicated case is Figure 12(a). When the rotational speed is small, less than 283 rad/s, the motions of the system are stable with synchronous periodic motion. The system shows double and multiperiodic motion when the speed is between 293 and 346 rad/s. Then the system shows period-one motion again over a large range from 356 to 775 rad/s. As the rotational speed changes from 785 to 995 rad/s, multiperiodic motion appears again. When the rotational speed lies between 1026 and 1340 rad/s, the system becomes irregular and enters chaotic motion. In a narrow range of rotational speed from 1351 to 1382 rad/s, the system shows multiperiodic motion again. Once the speed is higher than 1393 rad/s, the rotor vibrations are chaotic.
For Figure 12(b), with increasing rotating speed, the system experiences period-one motion, double periodic motion, period-one motion, double periodic motion, quasiperiodic motion, period-one motion, quasiperiodic motion, chaotic motion, double and multiperiodic motion, and chaotic motion in sequence. When the radial stiffness of the coupling is 1.09 N/m and two mass eccentricities are considered, the range of rotational speed for chaotic motion is smaller than when it is 1.05 or 1.07 N/m. But the motion is still complicated, including period-one motion, double and multiperiodic motion, and chaotic motion. When only the left rotor has a mass eccentricity, the motions are more complicated for a flexible coupling than for a rigid coupling, for which the period-one motions and multiperiodic motion are prominent.
From Figure 14, it can be seen that when both rotors have mass eccentricities, the rub-impact takes place more easily even when the rotational speed is small. For one mass eccentricity or two mass eccentricities, the lower the value of the radial stiffness of the coupling, the more complicated the motion of the system. In order to make the system more stable, it is better to choose a rigid coupling to connect the two rotors and to avoid large and multiple mass eccentricities.
Besides the bifurcation diagram, the shaft-centre trajectory, amplitude spectrum, and Poincaré map are used to further illustrate the dynamic behaviors of the presented system for given parameters. The results are shown in Figures 15 and 16. When the rotational speed is 869 rad/s and the radial stiffness of the coupling is 1.07 N/m for two mass eccentricities, the system exhibits a quasiperiodic motion. The Poincaré section map is a closed curve. The shaft-centre trajectory has a regular pattern. In the Fourier amplitude spectrum, there are two peak amplitudes. Nearly all of the vibration energy is distributed between the subsynchronous and synchronous components, but especially in the synchronous component. For one mass eccentricity, keeping the other parameters unchanged, it can be seen from Figure 16 that there is only a single point in the Poincaré map and the shaft-centre trajectory is a closed circle. Furthermore, the Fourier amplitude spectrum also has a single peak amplitude at the 1X component. All these phenomena mean that the system is in synchronous period-one motion.
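A Poincaré map is built by sampling the response once per shaft revolution after discarding the transient; the sketch below demonstrates this on a synthetic signal containing a half-frequency component, which collapses onto two map points as a double periodic motion should.

```python
import numpy as np

def poincare_points(t, x, omega, transient=0.5):
    """Sample a response x(t) once per shaft revolution (period 2*pi/omega)
    after discarding a transient fraction; the returned points populate one
    coordinate of a Poincare map or bifurcation diagram."""
    T = 2 * np.pi / omega
    t0 = t[0] + transient * (t[-1] - t[0])
    sample_times = np.arange(t0, t[-1], T)
    return np.interp(sample_times, t, x)

# Synthetic signal: synchronous plus half-frequency component, as in a
# double periodic response -- the map collapses onto two distinct points.
omega = 314.0
t = np.linspace(0.0, 8.0, 400000)
x = np.sin(omega * t) + 0.3 * np.sin(0.5 * omega * t)
print(np.unique(np.round(poincare_points(t, x, omega), 2)))
```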
Effect of Stiffness of Coupling on the Dynamic Response.
For a multirotor system, the coupling plays an important role in the dynamic behavior. This section discusses in detail the effect of the radial stiffness of the coupling on the dynamic response of the system. Figure 17 shows the bifurcation diagrams of the displacement of the right big disk at different radial stiffnesses of the coupling when the rotational speed is 314, 995, or 1445 rad/s for one mass eccentricity. At 314 rad/s, Figure 17(a) indicates that the system exhibits only synchronous periodic motion. At 995 rad/s, the system mainly exhibits period-two and period-one motion, as shown in Figure 17(b); the corresponding response characteristics are shown in Figure 18, with two discrete points in the Poincaré map and a shaft-centre trajectory formed by two closed circles. When the coupling is flexible and the rotational speed is 1445 rad/s, the system shows chaotic motion in Figure 17(c). The response characteristics of this chaotic motion are plotted in Figure 19: the shaft-centre trajectory is disordered, the Poincaré map shows many discrete points, and the Fourier amplitude spectrum is correspondingly continuous, with the vibration energy mainly distributed in the subsynchronous component. For a rigid coupling, by contrast, the system remains in period-one motion. All of these results reveal that the motion of the system becomes complicated when the coupling is very flexible.
Conclusions
The linear and nonlinear dynamic behavior of a two-rotor system with four disks supported by three elliptical bearings is investigated in this paper using the finite element method, including shear and gyroscopic effects. The effect of the bearings is expressed in the form of stiffness and damping coefficients that vary with rotational speed. The effects of two important parameters, the rotational speed and the radial stiffness of the coupling, on the dynamic response of the system are examined in detail. The linear analysis shows that the critical speeds and unbalance response of the rotors are sensitive to the coupling stiffness in the N+1 configuration. In the nonlinear analysis, two cases are considered: one in which only the left rotor has a mass eccentricity and one in which both rotors have mass eccentricities. For different combinations of system parameters, the numerical results are presented as bifurcation diagrams, shaft-centre trajectories, Poincaré maps, and amplitude spectra. The results indicate that the dynamic response of the system is very rich. When the coupling is flexible, the response comprises synchronous period-one motion, period-two and multiperiodic motion, quasiperiodic motion, and chaotic motion as the rotational speed varies. In general, the dynamic behavior of the system with a flexible coupling is more complicated than with a rigid coupling, and the motion then contains multiple harmonic and fractional harmonic components. As the radial stiffness of the coupling increases, the system settles mainly into synchronous periodic motion. The results also show that when every rotor has a mass eccentricity, the response exhibits more nonlinear phenomena than when only the left rotor does, especially for a rigid coupling. From this analysis it can be concluded that, to make the system more stable, it is better to use a rigid coupling to connect the two rotors and to avoid large or multiple mass eccentricities. The results developed in this study are useful for identifying, understanding, and avoiding undesirable vibration behaviors in these types of rotating machinery.
Nomenclature
REL: Reynolds lubrication equation
p: the film pressure
h: the film thickness
Lubricating oil viscosity
The hydrodynamic-active axial length of the journal bearing
Angle variable
The stiffness matrix of the whole rotor system
The damping matrix of the whole rotor system
The gyroscopic matrix of the whole rotor system
{ }: the displacement vector
{ }: the load vector
Figure 1: Dynamic stiffness (a) and damping coefficients (b) of the bearing as the speed increases.
Figure 2: The mechanical model of the rotor system with radial rub-impact.
Figure 3: The finite element model of the rotor system.
Figure 4: Critical speed map of the rotor-bearing system with the N+1 configuration.
Figure 10: Disk displacements for coupling stiffnesses of 1.06, 1.07, 1.08, and 1.09 N/m and a 1.0 kg·mm ∠0° mass unbalance at the left big disk.
Figure 15: The response characteristics of the system at 869 rad/s and a coupling stiffness of 1.07 N/m for two mass eccentricities.
Figure 16: The response characteristics of the system at 869 rad/s and a coupling stiffness of 1.07 N/m for one mass eccentricity.
Figure 17: The bifurcation diagrams versus the radial stiffness of the coupling for one mass eccentricity.
Figure 18: The response characteristics of the system at 995 rad/s and a coupling stiffness of 3.65 N/m for one mass eccentricity.
Figure 19: The response characteristics of the system at 1445 rad/s and a coupling stiffness of 8.05 N/m for one mass eccentricity.
Table 1: Parameters of the bearings.
Table 2: Cases with different unbalance parameters and different values of coupling stiffness.
"Engineering",
"Physics"
] |
Nobiletin, a hexamethoxyflavonoid from citrus pomace, attenuates G1 cell cycle arrest and apoptosis in hypoxia-induced human trophoblast cells of JEG-3 and BeWo via regulating the p53 signaling pathway
Background: Hypoxia is associated with abnormal apoptosis in trophoblast cells, which causes fetal growth restriction and related placental pathologies. Few effective methods for the prevention and treatment of placenta-related diseases exist. Natural products and functional foods have always been a rich source of potential anti-apoptotic drugs. Nobiletin (NOB), a hexamethoxyflavonoid derived from citrus pomace, shows anti-apoptotic activity and is a non-toxic constituent of dietary phytochemicals approved by the Food and Drug Administration. However, its effects on hypoxia-induced human trophoblast cells have not been fully studied. Objective: The aim of this study was to investigate the protective effects of NOB on hypoxia-induced apoptosis of human trophoblast JEG-3 and BeWo cells and the underlying mechanisms. Design: First, the protective effect of NOB on hypoxia-induced apoptosis of JEG-3 and BeWo cells was studied. Cell viability and membrane integrity were determined by CCK-8 assay and lactate dehydrogenase (LDH) activity, respectively. Real-time quantitative PCR (RT-qPCR) and Western blot analysis were used to detect the mRNA and protein levels of HIF1α. Propidium iodide (PI)-labeled flow cytometry was used to detect the cell cycle distribution. Cell apoptosis was detected by flow cytometry with Annexin V-FITC and PI double staining, and the expression of the apoptosis marker protein cl-PARP was detected by Western blot analysis. Then, the molecular mechanism of NOB against apoptosis was investigated. Molecular docking and molecular dynamics simulations were used to model the interaction between NOB and the p53 protein, and this interaction was verified in vitro by ultraviolet-visible (UV-visible) spectroscopy, fluorescence spectroscopy, and circular dichroism. Furthermore, changes in the expression of p53 signaling pathway genes and proteins were detected by RT-qPCR and Western blot analysis, respectively. Results: Hypoxia treatment decreased cell viability and cell membrane integrity in the JEG-3 and BeWo cell lines and increased the expression of HIF1α, cell cycle arrest in the G1 phase, and cell apoptosis, all of which were alleviated by NOB treatment. Molecular docking and dynamics simulations found that NOB bound spontaneously to human p53 protein, leading to a change in protein conformation. The intermolecular interaction between NOB and human p53 protein was further confirmed by UV-visible spectroscopy, fluorescence spectroscopy, and circular dichroism. After treatment with 100 μM NOB, down-regulation of the mRNA and protein levels of p53 and p21 and up-regulation of the BCL2/BAX mRNA and protein ratio were observed in JEG-3 cells; in BeWo cells, the mRNA and protein levels of p53 and p21 were also down-regulated after NOB treatment, but the BCL2/BAX ratio did not change after treatment with 100 μM NOB. Conclusion: NOB attenuated hypoxia-induced apoptosis in the JEG-3 and BeWo cell lines and might be a potential functional ingredient to prevent pregnancy-related diseases caused by hypoxia-induced apoptosis. These findings also support the exploration and utilization of citrus resources and the development of the citrus industry.
Hypoxia is a common risk factor for placenta-mediated pregnancy complications, such as preeclampsia, fetal growth restriction, preterm birth, and stillbirth. Hypoxia-induced apoptosis of trophoblast cells is an important mechanism of these diseases (4). On the one hand, the apoptosis of trophoblast cells can undermine the invasion and migration of extravillous cytotrophoblasts (evCTBs), impairing the ability of evCTBs to differentiate into trophoblast giant cells, which is detrimental to embryo implantation and the maintenance of early pregnancy (5). On the other hand, hypoxia leads to insufficient differentiation of cytotrophoblasts (CTBs) into syncytiotrophoblasts (STBs), thus affecting the endocrine, protective, and migratory functions of the placenta (6). Hence, the apoptosis of trophoblast cells might impair placental function, thereby threatening the health of the mother and fetus.
Citrus is one of the highest-yielding fruits in the world (7). According to previous studies, nobiletin (5,6,7,8,3',4'-hexamethoxyflavone, NOB), a highly methoxylated flavonoid found in the by-products of citrus processing, has anti-apoptotic activity. Propofol-induced neuronal apoptosis was attenuated by NOB in ischemic brain injury in male Sprague-Dawley rats (8). Myocyte apoptosis in cardiac hypertrophy was inhibited by treatment with NOB in male C57BL/6 mice (9). Hepatocyte apoptosis after liver transplantation in rats was reduced by NOB (10). Meanwhile, NOB is a non-toxic constituent of dietary phytochemicals approved by the Food and Drug Administration (11). In addition, NOB has been found to be safe for human choriocarcinoma trophoblast (BeWo) cells (12). Based on its anti-apoptotic properties and safety, NOB was hypothesized to be a phytochemical that might protect trophoblast cells from hypoxia-induced apoptosis.
JEG-3 and BeWo cells are commonly used as trophoblast models. JEG-3 cells are invasive and migratory (13), while BeWo cells can differentiate into STBs (14). In this study, JEG-3 and BeWo cell models were used to evaluate the effect of NOB on the apoptosis of hypoxia-induced trophoblast cells. On this basis, the protective effect of NOB in reducing hypoxia-induced apoptosis was explored through molecular docking and dynamics simulation of NOB and the p53 protein, which was verified in vitro by UV-visible spectroscopy, fluorescence spectroscopy, and circular dichroism, and through the mRNA and protein expression of the p53 signaling pathway.
In vitro cell culture
JEG-3 cells were seeded at a density of 1 × 10⁵ cells/mL in sterile culture flasks, cultured in DMEM (high glucose) with 10% (v/v) FBS and 100 μg/mL penicillin/streptomycin, and incubated in 95% air and 5% carbon dioxide at 37°C. BeWo cells were seeded at a density of 1 × 10⁵ cells/mL in sterile culture flasks, cultured in F12 (Ham) medium with 15% (v/v) FBS and 100 μg/mL penicillin/streptomycin, and incubated under the same conditions. On reaching 80% confluence, the cells (JEG-3 and BeWo) were passaged using trypsin-EDTA solution and subcultured in new flasks. Cell culture media were replaced every 3 days for BeWo cells and every 2 days for JEG-3 cells. For JEG-3 cells, NOB and cobalt chloride were dissolved in DMEM (high glucose) containing 0.1% DMSO; the DMEM (high glucose) in the control group contained only DMSO (0.1%). For BeWo cells, NOB and cobalt chloride were dissolved in F12 (Ham) medium containing 0.1% DMSO; the F12 (Ham) medium in the control group contained only DMSO (0.1%).
Hypoxia model
In this study, cobalt chloride was used to establish a hypoxia model in JEG-3 and BeWo cells. JEG-3 cells were seeded at a density of 1 × 10⁵ cells/mL and cultured for 48 h as described above (see section 'In vitro cell culture'). BeWo cells were seeded at a density of 1 × 10⁵ cells/mL and cultured for 72 h as described above. After removal of the media, the cells were cultured in serum-free medium supplemented with 500 μM cobalt chloride and NOB at different concentrations (0, 10, 33, and 100 μM). Cells treated without cobalt chloride and NOB were considered the control group. The cells were collected after another 12 h of culture for the subsequent analyses.
Cell viability assay
A CCK-8 kit (MultiSciences, Hangzhou, China) was used to determine cell viability according to the manufacturer's recommendations. After the hypoxia treatments (see section 'Hypoxia model'), CCK-8 reagent (10 μL) was added to each well of the 96-well plates and incubated at 37°C for another 4 h. The absorbance was measured with a microplate reader (BioTek, Winooski, VT, USA) at a wavelength of 450 nm.
Lactate dehydrogenase activity assay
After hypoxia treatment (see section 'Hypoxia model'), the cells were centrifuged for 5 min at 1,500 rpm, and the medium was collected for testing lactate dehydrogenase (LDH) activity. LDH activity was tested using an LDH kit (Jiancheng, Nanjing, China) according to the manufacturer's recommendations. Data were expressed as the percentage release of LDH from the treated cells relative to the control group.
Quantitative RT-PCR analysis
The mRNA expression levels were determined using previously reported methods with minor modifications (15).
PCR was conducted at 95°C for 4 min, followed by 39 cycles of 10 s at 95°C and 45 s at 60°C. The human gene-specific primers used are shown in Table 1.
Cell cycle analysis
Cell cycle distribution was detected using a cell cycle detection kit (Keygen, Beijing, China) according to the manufacturer's recommendations. A flow cytometer (ACEA, CA, USA) was used for measuring the cells in sub G1, G1, S, and G2/M phases.
Annexin V-FITC/PI assay
The Annexin V-fluorescein isothiocyanate (FITC) and propidium iodide (PI) apoptosis detection kit (Keygen, Beijing, China) was used to determine cell apoptosis according to the manufacturer's instructions. The cells were suspended in 500 μL binding buffer and double-stained with Annexin V-FITC/PI for 15 min, and cell apoptosis was determined using a flow cytometer (ACEA, CA, USA). The laser excitation wavelength was 488 nm; the green signal from Annexin V-FITC was observed at 525 nm, and the red signal from PI was observed at 620 nm.
Molecular docking and dynamics
AutoDock Vina 1.1.2 software was used for molecular docking. The CDX format files for NOB were prepared with ChemDraw software. The PDB format files for the p53 protein structure (PDB ID: 6GGC) were downloaded from the Protein Data Bank. Both NOB and the protein were processed using the AutoDock tools, parameterized, and saved in pdbqt format. For the docking process, the exhaustiveness parameter was set to 32, and the other parameters were left at their defaults. The conformation with the highest score was selected as the final docking conformation for the subsequent molecular dynamics simulation. AMBER 18 was used to conduct the molecular dynamics simulation on the complexes generated by molecular docking. The GAFF general force field was used for NOB, and the ff14SB force field was used for the protein. In the tLeap module, hydrogens, water, and counterions were added, and the topology and coordinate files were saved for the molecular dynamics simulation. After energy minimization, heating, and equilibration, the system was subjected to a 50 ns production run. The resulting trajectory was analyzed with the Cpptraj module, and the molecular mechanics/generalized Born surface area (MM-GBSA) algorithm was used to calculate the binding free energy of the small molecule.
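For orientation, the following sketch shows how the docking step described above could be driven from Python by invoking the AutoDock Vina 1.1.2 command-line tool; the file names and search-box coordinates are placeholders, not values from this study.

```python
# Hypothetical sketch of the docking step with AutoDock Vina 1.1.2.
# File names and box coordinates are placeholders only.
import subprocess

vina_cmd = [
    "vina",
    "--receptor", "p53_6ggc.pdbqt",   # receptor prepared from PDB ID 6GGC
    "--ligand", "nobiletin.pdbqt",    # ligand prepared with the AutoDock tools
    "--center_x", "10.0", "--center_y", "12.0", "--center_z", "8.0",
    "--size_x", "20", "--size_y", "20", "--size_z", "20",
    "--exhaustiveness", "32",         # value stated in the text
    "--out", "nob_p53_poses.pdbqt",
    "--log", "docking.log",
]
subprocess.run(vina_cmd, check=True)

# The pose with the best (most negative) affinity in docking.log would then be
# carried forward to the AMBER molecular dynamics simulation.
```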
UV-visible spectroscopy
p53 protein was mixed with an equal volume of NOB and maintained for 10 min at 37°C. The final concentration of p53 was 1 μM, and those of NOB were 0, 10, 33, and 100 μM. The absorbance of the mixture was measured over the wavelength range of 240-700 nm.
Fluorescence spectroscopy
p53 protein was mixed with an equal volume of NOB and maintained for 10 min at 37°C. The final concentration of p53 was 1 μM, and those of NOB were 0, 10, 33, and 100 μM. The fluorescence of the mixture was measured over the wavelength range of 320-440 nm, with the excitation and emission slit widths set to 5 nm and an excitation wavelength of 280 nm.
Circular dichroism
The circular dichroism of p53 protein was measured using a J-810 circular dichroism spectrometer (JASCO, Tokyo, Japan). p53 protein was mixed with an equal volume of NOB and maintained for 10 min at 37°C. The final concentration of p53 was 1 μM, and those of NOB were 0, 10, 33, and 100 μM. The mixture was scanned in the far-ultraviolet region (198-250 nm) at a scanning rate of 50 nm/min at 37°C. The average residue molecular weight was taken as 115 g/mol, and the proportion of each secondary structure (α-helix, β-sheet, β-turn, and random coil) was analyzed using the instrument's built-in program.
Statistical analysis
The data are presented as mean ± SD. All experiments were conducted three times independently. One-way analysis of variance (ANOVA) was conducted using the Statistical Analysis System (SAS Institute Inc., Cary, NC, USA). Duncan's multiple range test was used to determine differences in mean values between groups (P < 0.05). All figures were prepared using GraphPad Prism software (version 6.00, GraphPad Software Inc., San Diego, CA, USA).
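As an illustration of the group comparison described here (the study itself used SAS with Duncan's multiple range test), a one-way ANOVA on hypothetical triplicate viability data might look as follows in Python.

```python
# Illustrative only: one-way ANOVA across three hypothetical treatment groups.
from scipy import stats

control = [1.00, 0.98, 1.02]          # hypothetical normoxic viabilities
hypoxia = [0.62, 0.65, 0.60]          # hypothetical hypoxia (CoCl2) group
hypoxia_nob100 = [0.85, 0.88, 0.83]   # hypothetical hypoxia + 100 uM NOB

f_stat, p_value = stats.f_oneway(control, hypoxia, hypoxia_nob100)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```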
NOB improved the hypoxia-impaired cell viability and cell membrane integrity of JEG-3 and BeWo cells
The CCK-8 assay is a commonly used indicator of cell viability, reflecting cell metabolism and mitochondrial activity (16). The degree of damage to the cell membrane can be estimated by detecting the release of LDH, a biochemical indicator of cell membrane integrity (17). The effects of NOB on the cell viability and cell membrane integrity of JEG-3 and BeWo cells were tested by the CCK-8 and LDH assays, respectively. Exposure to hypoxia significantly decreased cell viability and increased LDH activity compared with normoxia (Fig. 2). Treatment of the hypoxic cultures with 100 μM NOB significantly increased the cell viability and decreased the LDH activity of JEG-3 and BeWo cells compared with hypoxia treatment alone. These results revealed that cell viability and cell membrane integrity were decreased by hypoxia and that NOB could alleviate this effect.
NOB protected JEG-3 and BeWo cells from biological hypoxia
HIF1α is a sensitive oxygen sensor and a biomarker of hypoxia (18). All mammals control the cellular response to hypoxia by regulating the HIF1α transcription factor family (19). To illustrate the effect of NOB on HIF1α in hypoxia-induced JEG-3 and BeWo cells, HIF1α levels were measured by RT-qPCR and Western blot analysis. HIF1α levels were significantly increased in JEG-3 and BeWo cells under the hypoxic environment, and this increase was rescued by NOB treatment (Fig. 3), suggesting that NOB can protect cells from biological hypoxia.
NOB reduced G1 phase arrest in hypoxia-induced cells of JEG-3 and BeWo
The populations of JEG-3 and BeWo cells arrested in the G1 phase increased significantly, and the number of cells in the S phase decreased significantly, after exposure to hypoxia compared with normoxia (Table 2). For JEG-3 cells, the G1 and S populations decreased significantly, while the sub-G1 and G2/M populations increased significantly, after treatment with 100 μM NOB. For BeWo cells, after incubation with 100 μM NOB, the G1 population was markedly reduced, while the G2/M population was significantly increased.
NOB attenuated hypoxia-induced apoptosis in JEG-3 and BeWo cells
To study whether NOB could attenuate hypoxia-induced apoptosis in JEG-3 and BeWo cells, flow cytometric analysis with Annexin V/PI double staining was performed, detecting the externalization of phosphatidylserine in the membrane of early apoptotic cells (Annexin positive and PI negative) and the loss of cell membrane integrity in late apoptotic cells (Annexin positive and PI positive) (20). For JEG-3 and BeWo cells, the proportions of early, late, and total apoptotic cells increased significantly under the hypoxic environment compared with normoxia (Fig. 4). The apoptosis of hypoxia-induced cells was alleviated by treatment with NOB at concentrations of 33 and 100 μM.
Moreover, PARP is an enzyme involved in DNA repair and a major substrate of caspase-3 (21). Once PARP is cleaved, it loses its enzymatic activity and its ability to repair DNA (22). Therefore, the cleaved PARP (cl-PARP) level was detected by Western blot analysis to characterize the effect of NOB on caspase-induced apoptosis. As presented in Fig. 5, the level of cl-PARP was significantly increased in hypoxia-induced JEG-3 and BeWo cells compared with cells under normoxia, and it was reduced in cells treated with 100 μM NOB compared with those exposed to hypoxia alone. Together, these results revealed that 100 μM NOB could reduce the hypoxia-induced apoptosis of JEG-3 and BeWo cells.
Figure 2: (a) Cell viability measured by the CCK-8 assay. (b) Cell membrane integrity assessed by LDH activity. JEG-3 and BeWo cells were exposed to 0, 10, 33, or 100 μM NOB with 500 μM cobalt chloride in serum-free medium for 12 h; cells treated without cobalt chloride and NOB for 12 h served as the normoxic control. Values are mean ± SD of independent experiments (n = 3); different capital letters indicate significant differences at P < 0.05 by one-way ANOVA.
Figure 3: Protein expression of HIF1α in JEG-3 and BeWo cells estimated by Western blot analysis. Treatments and statistics as in Figure 2.
NOB and p53 protein combined stably through van der Waals forces
p53, a major tumor suppressor, regulates apoptosis by activating the p53 pathway in response to DNA damage induced by hypoxia (23). The simulation time typically used in molecular dynamics studies of protein structural change is 10-50 ns (24). In this study, a molecular dynamics model with a simulation time of 50 ns was used to analyze the interaction between NOB and the p53 protein. The root mean square deviation (RMSD) is the average deviation between the protein conformation at a given time and the original structure and is an important measure of the stability of the system (25). As shown in Fig. 6a, the RMSD fluctuated strongly during the first 25 ns of the simulation and stabilized thereafter. NOB bound well to the p53 protein, and its binding favored the stable existence of the p53 protein, with smaller conformational fluctuations. The radius of gyration (Rg) characterizes the compactness of the protein structure (26). As shown in Fig. 6b, Rg remained stable without major fluctuations during the simulation, and the p53 protein structure remained compact throughout. Furthermore, the binding free energy was calculated from the 25-50 ns portion of the trajectory (after equilibration), and the results are shown in Table 3. NOB bound well to the p53 protein, with an estimated binding free energy of −26.4 ± 3.38 kcal/mol; the essential driving force was van der Waals interactions. The interaction between NOB and the p53 protein after the 50 ns dynamics simulation is shown in Fig. 6c. NOB interacted with Leu145, Thr150, Pro151, Thr155, Val157, Cys220, Pro222, Pro223, and Thr230 in the flexible loop region of p53.
Table 2: Cell cycle distribution of JEG-3 and BeWo cells detected by flow cytometry with PI staining. Cells were exposed to 0, 10, 33, or 100 μM NOB with 500 μM cobalt chloride in serum-free medium for 12 h; cells treated without cobalt chloride and NOB for 12 h served as the normoxic control. Values are mean ± SD of independent experiments (n = 3); within a given cell cycle phase (sub-G1, G1, S, or G2) in the same cell line, different capital letters indicate significant differences at P < 0.05 by one-way ANOVA.
Figure 4: Degree of apoptosis of BeWo cells. Treatments and statistics as in Figure 2.
Figure 5: cl-PARP levels of BeWo cells. Treatments and statistics as in Figure 2.
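For readers unfamiliar with the trajectory metrics used above, the following minimal sketch computes the RMSD and radius of gyration directly from coordinate arrays. It is a pure-NumPy illustration with hypothetical coordinates, assuming frames are already aligned to the reference, and is not the AMBER/Cpptraj workflow used in this study.

```python
# Minimal RMSD and radius-of-gyration calculation on toy coordinates.
import numpy as np

def rmsd(frame, reference):
    """Root mean square deviation between two (N, 3) coordinate arrays."""
    return np.sqrt(np.mean(np.sum((frame - reference) ** 2, axis=1)))

def radius_of_gyration(frame, masses):
    """Mass-weighted radius of gyration of an (N, 3) coordinate array."""
    center = np.average(frame, axis=0, weights=masses)
    sq_dist = np.sum((frame - center) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

# Hypothetical 3-frame trajectory of a 5-atom fragment (angstroms).
rng = np.random.default_rng(0)
reference = rng.normal(size=(5, 3))
trajectory = [reference + 0.1 * rng.normal(size=(5, 3)) for _ in range(3)]
masses = np.full(5, 12.0)  # treat every atom as carbon for simplicity

for i, frame in enumerate(trajectory):
    print(f"frame {i}: RMSD = {rmsd(frame, reference):.3f} A, "
          f"Rg = {radius_of_gyration(frame, masses):.3f} A")
```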
The interaction between NOB and p53 protein was further verified in vitro
UV-visible spectrum of NOB and p53 protein
The UV-visible spectrum was used to detect whether the ligand bound to the protein during complex formation, and changes in the spectrum reflect changes in the secondary structure of the protein (27). Figure 7a shows that the complex of NOB and p53 protein had three absorption peaks, at 250, 270, and 335 nm. With increasing NOB concentration, the absorption peaks rose correspondingly, indicating that NOB changed the secondary structure of the p53 protein.
Fluorescence spectroscopy of NOB and p53 protein
The tryptophan, tyrosine, and phenylalanine residues of p53 protein emit intrinsic fluorescence when excited at a wavelength of 280 nm. Figure 7b shows the fluorescence quenching of p53 protein by NOB at different concentrations. With increasing NOB concentration, the intrinsic fluorescence intensity of the p53 protein gradually decreased, with a blue shift of the peaks (Δλ = 5 nm), indicating that NOB entered the hydrophobic pocket of the p53 protein and interacted with the protein, thereby increasing the hydrophobicity of the p53 protein. To investigate the mechanism of fluorescence quenching, the quenching constant was calculated using the Stern-Volmer equation (28): F₀/F = 1 + Ksv[Q] = 1 + Kq·τ₀·[Q], where F₀ is the fluorescence intensity of free p53 protein (a.u.), F is the fluorescence intensity of p53 protein after NOB treatment (a.u.), Ksv is the quenching rate constant, [Q] is the NOB concentration (mol·L⁻¹), Kq is the bimolecular quenching rate constant (L·mol⁻¹·s⁻¹), and τ₀ is the average lifetime of the fluorescent molecules in the absence of a quencher; the fluorescence lifetime of biomacromolecules is about 10⁻⁸ s. The calculated quenching rate constant Ksv was 2.1 × 10³ L·mol⁻¹, and Kq was 2.1 × 10¹¹ L·mol⁻¹·s⁻¹ (Table 4), which is much greater than 2.0 × 10¹⁰ L·mol⁻¹·s⁻¹, the maximum quenching constant of biomacromolecules (29). Hence, the quenching mechanism was determined to be static quenching. This also suggests that NOB entered the hydrophobic pocket of the p53 protein and that the interaction changed the p53 protein conformation, finally leading to the quenching of p53 protein fluorescence.
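The Stern-Volmer analysis above reduces to a linear fit of F₀/F against the quencher concentration. The short Python sketch below reproduces the arithmetic with the constants reported here; the F₀/F values themselves are hypothetical.

```python
# Worked check of the Stern-Volmer analysis: Ksv from a linear fit, Kq = Ksv / tau0.
import numpy as np

quencher = np.array([0.0, 10e-6, 33e-6, 100e-6])   # [Q], NOB in mol/L
f0_over_f = 1.0 + 2.1e3 * quencher                  # hypothetical, ideal S-V line

# Fit F0/F = 1 + Ksv * [Q]; the slope of the linear fit is Ksv.
ksv = np.polyfit(quencher, f0_over_f, 1)[0]

tau0 = 1e-8                    # fluorescence lifetime of biomacromolecules (s)
kq = ksv / tau0                # bimolecular quenching rate constant

print(f"Ksv = {ksv:.2e} L/mol, Kq = {kq:.2e} L/(mol*s)")
# Kq ~ 2.1e11 >> 2.0e10, the diffusion-limited maximum, indicating static quenching.
```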
Circular dichroism of NOB and p53 protein
Circular dichroism is an excellent method for the rapid determination of protein secondary structure (30). Compared with the untreated group, the α-helix and random coil content of the p53 protein decreased, and the β-sheet content increased, after treatment with 100 μM NOB (Table 5). These results revealed that the interaction of NOB with the p53 protein led to the transformation of α-helix and random coil into β-sheet.
Regulation of NOB on p53 in hypoxia-induced cells of JEG-3 and BeWo
To investigate the regulation of apoptosis by p53, p53 and its related genes and proteins were measured by RT-qPCR and Western blot analysis. Compared with cells under normoxia, the mRNA and protein levels of p53, p21, and BAX were increased in JEG-3 and BeWo cells under the hypoxic environment, while the mRNA and protein levels of BCL2 and the BCL2/BAX ratio were decreased (Figs. 8 and 9). Treatment of JEG-3 cells with 100 μM NOB significantly reduced the mRNA and protein levels of p53, p21, and BAX and significantly increased the mRNA and protein BCL2/BAX ratio in hypoxia-induced cells. For BeWo cells, the mRNA and protein levels of p53 and p21 decreased after hypoxia-induced cells were treated with 100 μM NOB, while no differences in the mRNA and protein BCL2/BAX ratio were observed. These findings suggest that p53 is involved in the regulation of apoptosis by NOB.
Discussion
With the development of the food industry, NOB has been developed and used as a natural phytochemical. This study revealed the anti-apoptotic effect of NOB in hypoxia-induced JEG-3 and BeWo cells, as measured by Annexin V-FITC/PI and cl-PARP assays. With 100 μM NOB treatment, there was a decrease in the mRNA and protein levels of HIF1α and in G1 phase arrest. NOB and human p53 protein bound spontaneously, leading to a change in the conformation of the p53 protein, as detected by molecular docking and dynamics. These findings were also verified in vitro by UV-visible spectroscopy, fluorescence spectroscopy, and circular dichroism. In JEG-3 and BeWo cells, treatment with 100 μM NOB down-regulated the mRNA and protein expression of p53 and p21. The mRNA and protein BCL2/BAX ratio in hypoxia-induced JEG-3 cells was up-regulated after treatment with 100 μM NOB, whereas the ratio in BeWo cells did not change.
HIF1α activation is the most recognized response to hypoxia in mammalian cells, and its activity can be stabilized by CoCl₂, which prevents its degradation by prolyl hydroxylases (19). After stabilization, HIF1α and its cofactors bind to the HIF response element in the promoters of target genes to coordinate an extensive transcriptional program in response to the hypoxic environment (31). Annexin V-FITC/PI staining, a phenotypic indicator of apoptosis, showed that JEG-3 and BeWo cells had undergone apoptosis, and the apoptosis marker protein cl-PARP was also activated during this process. These results suggest that a hypoxia-induced apoptosis model was successfully established by CoCl₂ in this study.
Table 5: Secondary structures of p53 protein detected by circular dichroism. p53 protein (final concentration 1 μM) was mixed with an equal volume of NOB (final concentrations 0, 10, 33, or 100 μM) and maintained for 10 min at 37°C. Values are mean ± SD of independent experiments (n = 3); within the same column, different capital letters indicate significant differences at P < 0.05 by one-way ANOVA.
Figures 8 and 9: mRNA and protein levels of p53 signaling pathway genes. JEG-3 and BeWo cells were exposed to 0, 10, 33, or 100 μM NOB with 500 μM cobalt chloride in serum-free medium for 12 h; cells treated without cobalt chloride and NOB for 12 h served as the normoxic control. Values are mean ± SD of independent experiments (n = 3); different capital letters indicate significant differences at P < 0.05 by one-way ANOVA.
Many phytochemicals have protective effects against hypoxia-induced trophoblast cell apoptosis. Rosiglitazone was found to inhibit the activity of caspase-3 and caspase-9 and to reduce the TUNEL-positive rate in JEG-3 and BeWo cells cultured in 2% O₂ (4). Punicalagin was shown to down-regulate p53 and attenuate apoptosis in human placental STBs in 1% O₂ (32). Furthermore, melatonin has been reported to inhibit apoptosis, as a decline in cl-PARP level was observed in human placental STBs undergoing hypoxia (1% O₂)/reoxygenation (8% O₂). Other phytochemicals, such as quercetin, hesperidin, and hesperetin, have shown anti-apoptotic effects in HTR-8/SVneo cells subjected to hypoxia (0.2% O₂)/reoxygenation (95% air, 5% CO₂) by inhibiting the activities of caspase-3 and caspase-7 (34). NOB, a flavonoid containing six methoxy groups, has been reported to reduce cardiomyocyte apoptosis in mice with myocardial hypertrophy (9). This study established that NOB reduced the sensitivity of JEG-3 and BeWo cells to hypoxia, which was manifested by a decrease in HIF1α levels and, in turn, reduced apoptosis.
The p53 tumor suppressor protein plays an important role in monitoring hypoxic stress signals by activating specific transcriptional targets that control cell cycle arrest and apoptosis (22,25,35). Molecular docking and dynamics simulations showed that NOB and the p53 protein bind spontaneously, with van der Waals potential energy as the main driving force. On binding NOB, the conformation of the p53 protein tended to stabilize after small fluctuations, and the structure remained compact throughout. UV-visible spectroscopy verified that NOB modified the secondary structure of the p53 protein in vitro. This occurred because NOB entered the hydrophobic pocket of the p53 protein and caused static quenching of its fluorescence, as demonstrated by fluorescence spectroscopy. Circular dichroism spectroscopy further confirmed that the interaction of NOB with the p53 protein led to the transformation of the secondary structure of p53, α-helix and random coil, into β-sheet. These results suggest that the entry of NOB into cells may affect the function and activity of the p53 protein, and the post-transcriptional and post-translational modifications of the p53 protein influence its downstream transcriptional targets.
p53 accumulates with the activation of HIF1α in hypoxic regions (36), as transcriptionally active p53 is stabilized through a physical association with HIF1α (37), and the direct binding between HIF1α and MDM2 suppresses MDM2-dependent ubiquitylation of p53 in vivo and p53 nuclear export (38). p21, a downstream gene of p53, is required for p53-mediated G1 and G2 cell cycle arrest, with p21 being more effective at preventing G1 progression (39). If hypoxia-induced DNA damage accumulation is severe or unrepairable, the failure of cells to exit the cell cycle from incomplete mitosis may lead to the induction of apoptosis (40). Many natural products, especially those from medicinal and food plants, have been reported to target p53 to inhibit apoptosis. Punicalagin was found to attenuate hypoxia-induced apoptosis by reducing p53 and HIF1α levels in cultured human placental STBs (32). Curcumin and silymarin were shown to inhibit paracetamol-induced hepatocellular apoptosis in adult male albino rats, as reflected by an apparent increase in the number of p53-stained cells (41). Lycium barbarum polysaccharides were reported to protect H9c2 cells from hypoxia-induced apoptosis via down-regulation of p53 (42). Glucomoringin isothiocyanate was found to reduce apoptosis in H₂O₂-induced SH-SY5Y cells by inhibiting p53 activity (43). In this study, p53 was involved in the regulation of apoptosis by NOB, as shown by the decreased expression levels of p53 and p21 and the relief of cell cycle arrest in the G1 phase.
p53 is predominantly nuclear, and it is conceivable that a portion of hypoxia-stabilized p53 translocates from the nucleus to the cytoplasm (44). BAX and Siva1 are transcriptional target genes of p53, and Siva1 can reduce the levels of downstream BCL2 and BCLXL. The balance between the anti-apoptotic and pro-apoptotic members of the BCL2 family can therefore be regulated by p53 to affect the outcome of mitochondria-dependent intrinsic apoptosis. These apoptotic effects are neutralized by the formation of heterodimeric complexes between BAK and BCL2 proteins (45). The literature has described anti-apoptotic activities of some phytochemicals acting through the intrinsic apoptosis pathway. Apoptosis in IND-induced GES-1 cells was inhibited by total triterpenoids through the up-regulation of BCL2, BCLXL, and the BCL2/BAX ratio (46). Apoptosis of Aβ-induced PC12 cells was attenuated by treatment with schisandrin and nootkatone, which increased the level of BCL2 and decreased the levels of BAD and BAX (47). Mitochondria-mediated apoptosis was decreased by treatment with 5-heptadecylresorcinol, which increased the BCL2/BAX ratio in neurocytes (48). Acrylamide-induced neuronal apoptosis was attenuated by treatment with garcinol, which reduced the BCL2/BAX ratio in the brains of zebrafish larvae (49). This study found that 100 μM NOB increased the BCL2/BAX ratio in JEG-3 cells but not in BeWo cells, suggesting that NOB might protect JEG-3 cells against hypoxia via the p53-mediated intrinsic apoptosis pathway, whereas in BeWo cells its anti-apoptotic effect might operate through other p53-mediated pathways, such as the extrinsic apoptotic pathway.
Conclusions
The G1 cell cycle arrest and apoptosis induced by hypoxia were attenuated by treatment with 100 μM NOB in JEG-3 and BeWo cells. In JEG-3 cells, NOB protected against hypoxia through the p53-mediated intrinsic apoptosis pathway. In BeWo cells, the anti-apoptotic effect also involved p53, but the intrinsic apoptosis pathway was not implicated. These findings suggest that NOB is an effective natural product with promise for preventing hypoxia-induced apoptosis in trophoblast cells.
"Biology"
] |
Different Levels of Autophagy Activity in Mesenchymal Stem Cells Are Involved in the Progression of Idiopathic Pulmonary Fibrosis
Idiopathic pulmonary fibrosis (IPF) is an age-related interstitial lung disease that occurs predominantly in people over 65 years of age and for which effective therapeutic agents are lacking. It has been demonstrated that mesenchymal stem cells (MSCs), as well as alveolar epithelial cells (AECs), can perform repair functions. However, MSCs lose these repair functions owing to their distinctive aging characteristics, eventually contributing to the progression of IPF. Recent breakthroughs have revealed that the degree of autophagic activity influences the renewal and aging of MSCs and determines the prognosis of IPF. Autophagy is a lysosome-dependent pathway that mediates the degradation and recycling of intracellular material and is an efficient way to renew the non-nuclear (cytoplasmic) part of eukaryotic cells; it is essential for maintaining cellular homeostasis and is a potential target for regulating MSC function. This review therefore focuses on changes in the autophagic activity of MSCs, clarifies the relationship between autophagy and the health status of MSCs, and examines the effects of autophagic activity on MSC senescence and IPF, providing a theoretical basis for promoting the clinical application of MSCs.
Introduction
Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive, irreversible, and fatal lung disease marked by lung scarring, with an average life expectancy of 3-5 years after diagnosis [1-4]. IPF primarily affects middle-aged and older adults; its prevalence increases with age across the numerous countries studied, with a high rate in those over 65 years [5]. The pathogenesis of IPF hinges on sustained or repetitive lung epithelial injury, which triggers the activation of fibroblasts and subsequent myofibroblast differentiation [6]. Two newly FDA-approved therapies, pirfenidone and nintedanib, show modest effectiveness in mitigating the decline in lung function over a 1-year follow-up period [7-10]. Nonetheless, these antifibrotic therapies are still in their nascent stages and are not frequently recommended for patients with a milder or stabilized course of the disease, primarily owing to the substantial incidence of side effects [10,11]. Lung cancer frequently arises as a complication of IPF, and one-fifth of patients experience acute exacerbations after treatment [12].
Cellular therapy for pulmonary fibrosis (PF) encompasses the application of mesenchymal stem cells (MSCs) [13]. MSCs are multipotent cells with the ability to differentiate into diverse cell types and to exert immunomodulatory, antiproliferative, and anti-inflammatory effects [14]. However, a multitude of internal and external factors can alter the health status of MSCs, influencing their capacity as therapeutic cells to facilitate the repair and regeneration of damaged lung tissue [15]. The regulation of autophagy within MSCs stands out as a mechanism that could influence the properties of MSCs and their regenerative and therapeutic potential [16]. Autophagy is the principal cellular process for breaking down and recycling intracellular proteins and organelles in various physiological and pathological contexts [17]. When autophagy is impaired, malfunctioning organelles are not efficiently repaired and detrimental metabolites are not eliminated within MSCs, ultimately resulting in MSC senescence [18]. Excessive autophagy, by contrast, leads to apoptosis of MSCs and impairs their renewal capacity, ultimately leaving MSCs unable to repair damaged lung tissue and accelerating the onset of IPF [19]. The level of autophagic activity is therefore closely related to the health status of MSCs.
In recent years, an increasing number of studies have investigated the regulatory network of autophagy in IPF [20,21]. Autophagy acts like a double-edged sword, and autophagic activity may be a significant driver of IPF development [22]. Basal autophagic activity maintains pulmonary homeostasis in a cytoprotective manner: it selectively degrades potentially detrimental cytoplasmic substances, uneliminated proteins, and unfavorable microorganisms, such as damaged organelles, viruses, protists, and bacteria [23]. This review focuses on the aging characteristics and functional changes of MSCs in IPF, as well as the mechanisms by which autophagic activity affects the health status of MSCs, to promote a more comprehensive application of MSCs in regenerative medicine.
The Emerging Role of Autophagy in IPF
2.1. The Biological Function of Autophagy. Autophagy is the predominant cellular mechanism responsible not only for bulk recycling but also for targeting specific organelles, protein complexes, protein aggregates, and invading pathogens for catabolism [17]. According to the mechanism used to deliver cargo to the lysosome, autophagy can be classified as microautophagy, chaperone-mediated autophagy, or macroautophagy (MA) [24].
The mammalian target of rapamycin (mTOR) kinase is a conserved protein kinase involved in a multitude of cellular processes, including nutrient sensing, cell growth, and autophagy, and is a signaling control point downstream of growth factor receptor signaling, hypoxia, ATP levels, and insulin signaling [25,26]. mTOR kinase is a downstream effector of the PI3K/Akt pathway; it signals in the presence of nutrients and promotes cellular growth by stimulating the expression of ribosomal proteins and enhancing protein translation [27]. Crucially, mTOR also suppresses autophagy under these growth-favorable circumstances [28]. The activity of mTOR kinase is inhibited by signals of nutrient deficiency, such as hypoxia [29]. Upstream of mTOR, when cellular ATP levels are low, activation of adenosine 5′-monophosphate-activated protein kinase (AMPK) enhances the inhibitory effect of the Tsc1/Tsc2 tumor suppressor proteins on Rheb, a small GTPase essential for mTOR function [30]. Consequently, decreased mTOR activity triggers autophagy, ensuring that the cell adapts to its changing environment by slowing growth and increasing catabolic processes.
Autophagy occurs constitutively in all eukaryotic cells and operates at a basal level, serving as a homeostatic mechanism that regulates the degradation of molecules and the turnover of organelles [16]. In this context, autophagy is directed toward the degradation of misfolded protein cargos, thereby preventing the accumulation of the relevant proteins and the consequent toxicity that may ultimately result in cellular damage and death [31]. Autophagy is rapidly induced under conditions of glucose or amino acid deprivation, oxidative stress, hypoxia, and exposure to xenobiotics, all of which may initiate or exacerbate cellular injury [32]. Autophagy is therefore not only a dynamic adaptation pathway but also a safeguard of proteome integrity and energy metabolism. Paradoxically, excessive autophagy has been observed in association with cell death, whereas controlled autophagy is protective by providing essential substrates [33]. To avoid confusion, the term "autophagic cell death" has been restated as "cell death with autophagy" to describe cell death that is accompanied by autophagic flux and suppressed by inhibition of the autophagy pathway [34]. Autophagic flux refers to the whole process of autophagy, and there are various methods to monitor it [35]. An ideal method to assess autophagic activity is measuring LC3-II levels, but it is crucial to complement this with an examination of substrate degradation (e.g., SQSTM1/p62) [35]. Furthermore, changes in autophagic flux can be confirmed through genetic modifications (such as short interfering RNA against ATG genes), pharmacological inhibitors such as 3-methyladenine (3-MA) and chloroquine, or inducers such as rapamycin [35].
The Role of Autophagy in IPF.
IPF is a fatal chronic interstitial lung disease that impairs both lung mechanics and gas exchange. With the emergence of advanced molecular diagnostics, it is increasingly apparent that the pathogenesis of IPF is intricate, involving multiple molecular pathways, and is thus likely to necessitate diverse treatment strategies [6,36].
Altered autophagy in fibroblasts has also been documented as a crucial factor in the pathogenesis of human IPF [37]. Notably, autophagic activity is abnormally low in IPF fibroblasts, which has been attributed to low expression of FoxO3a leading to reduced LC3B transcription and ultimately decreased autophagic flux in these cells [38,39]. Defective autophagy is necessary to maintain a cell death-resistant phenotype in fibroblasts within a collagen-rich matrix [20,38]. The potential profibrotic function of autophagy in IPF fibroblasts necessitates a reevaluation of the use of autophagy activators in the treatment of IPF, with a focus on context-specific approaches.
Recent evidence highlights the pivotal contributions of disrupted mitochondrial homeostasis in alveolar epithelial type II cells (AECIIs), fibroblasts, and alveolar macrophages (AMs) to the pathogenesis of IPF [40] (Figure 1). For instance, the accumulation of dysmorphic and dysfunctional mitochondria within AECIIs has been reported in the lungs of IPF patients [40]. The compromised mitochondria in AECIIs are linked to reduced PINK1 levels and impaired mitophagy, and PINK1-deficient mice demonstrate disrupted mitochondrial homeostasis and the onset of PF [40]. The expression of PARK2, another protein associated with mitophagy, is decreased in the lung fibroblasts of IPF patients. PARK2 deficiency exacerbates bleomycin-induced PF in mice by enhancing myofibroblast differentiation and proliferation through promotion of the PDGFR-PI3K-Akt signaling pathway [41]. Pirfenidone, an FDA-approved therapy and a landmark in the field of IPF treatment, exerts its antifibrotic effects partially through the induction of PARK2-mediated mitophagy and the inhibition of myofibroblast differentiation [42]. Mitophagy, a subtype of macroautophagy, is elevated in profibrotic AMs [43].
During the fibrotic process, Akt1-mediated induction of mitochondrial reactive oxygen species (ROS) triggers mitophagy in AMs, thereby influencing macrophage apoptosis resistance and the expression of TGF-β1 [43]. The TGF-β1 derived from AMs is required for PF, as it promotes the differentiation of fibroblasts into myofibroblasts and the development of PF [43]. Furthermore, TGF-β1 was shown to induce mitophagy in AECIIs but to reduce mitophagy in fibroblasts by activating Akt in IPF lungs [43]. Considering the varying impact of mitophagy on different cell types in the development of IPF, targeting cell-type-specific mitophagy could lead to more effective therapeutic results in the treatment of IPF.
Role of Autophagy in the Therapeutic Potential of MSCs
Since they were first tested in 1995, MSCs have gained wide popularity and have been studied extensively in preclinical models [44]. MSCs afford several advantages, such as easy accessibility, low immunogenicity, and therapeutic potential in regenerative medicine [45].
Owing to these properties, MSCs have become a very promising tool for the therapy of different diseases and ideal cells for the treatment of IPF [46]. Initially, the beneficial effects of MSC-based therapies were attributed to the replacement capacity of MSCs [47]. However, this view has not stood the test of time; studies have revealed that restoring the structure and function of injured tissues by direct cell replacement is not the primary mode of action of MSCs [48,49]. Research to date has demonstrated that the MSC-derived secretome, which comprises a series of bioactive molecules and extracellular vesicles (EVs), plays a key role in immune modulation and in promoting tissue repair [50,51]. The keratinocyte growth factor (KGF), hepatocyte growth factor (HGF), and epidermal growth factor (EGF) derived from MSCs contribute to tissue repair. MSC-derived vascular endothelial growth factor (VEGF) has also been studied extensively for its angiogenic properties, which promote reepithelialization and angiogenesis [52]. MSCs reprogram proinflammatory macrophages (M1) toward an anti-inflammatory phenotype (M2), thereby exerting antifibrotic effects [53]. Furthermore, MSCs exert potent antifibrotic effects by modulating the ratio of metalloproteinases to tissue inhibitors of metalloproteinases [54,55]. Given that IPF is an age-related disease, recent studies have found that MSCs undergo aging under sustained pathological conditions such as chronic injury and oxidative stress, which affects their therapeutic activity and contributes to PF [56] (Figure 2). Recently, modulating autophagy in MSCs has been proposed as a potential new approach for improving the therapeutic effects of MSCs (Table 1). Autophagy plays a dual role in MSC-based therapy: (1) modulating autophagy in MSCs may control their proliferation, activation, and effector functions; and (2) MSCs are able to modulate the autophagy of immune and other cells that play important roles in the pathogenesis of inflammatory lung diseases [67]. Both mechanisms ultimately affect the efficiency of MSC-based therapy. The initial observation indicating the crucial involvement of autophagy in MSC processes was the disparity in autophagosome quantities
between undifferentiated MSCs and their differentiated counterparts [68]. Furthermore, hindered fusion between autophagosomes and lysosomes, which obstructs autophagosome degradation, culminates in the accumulation of autophagosomes within undifferentiated MSCs [69].
Table 1 (excerpt): In diabetic kidney disease, MSCs diminish cell death in kidney tissue, supporting podocyte maintenance while also downregulating overinduction of the autophagy pathway, a double-edged sword [66].
A recent study indicates that inhibiting autophagy enhances the immunosuppressive abilities of MSCs [70]. The research reveals that reducing the expression of the Becn1 gene in MSCs (short hairpin Becn1-MSCs) strengthens their therapeutic and immunomodulatory effects [70]. Notably, treatment with these modified short hairpin Becn1-MSCs produced a more pronounced decrease in the populations of CD4+ and CD8+ T cells, as well as reduced proliferation of MOG (myelin oligodendrocyte glycoprotein)-specific CD4+ T cells, without affecting T cell polarization [70]. Similar results were achieved when mice received MSCs that had been pretreated with an autophagy inhibitor [70].
The modulation of MSC autophagy can significantly influence their secretory capacity, thereby affecting their overall functionality [71]. Notably, pretreating MSCs with the autophagy inducer rapamycin before subcutaneous injection augments their wound-healing potential, an enhancement closely linked to the promotion of angiogenesis driven by autophagy-induced secretion of VEGF [71]. Conversely, MSCs in which BECN1 is silenced, causing an early blockade of the autophagic machinery, exhibit a diminished therapeutic effect [71].
Thus, modulation of autophagy in MSCs appears to be a potential target for enhancing the therapeutic properties of MSC-based therapy, but caution is warranted and further studies should be conducted.
Importance of Autophagy in Maintaining Healthy MSCs
4.1. Excessive Autophagy Promotes Apoptosis of MSCs. MSCs are a heterogeneous population of multipotent stromal stem cells that can be easily isolated from a variety of sources [72]. MSCs offer diverse benefits that stem from their ability to differentiate into osteoblasts, chondrocytes, and adipocytes under appropriate and specific stimuli [73,74]. Additionally, MSCs exert an immunomodulatory effect on innate and adaptive immune responses via interaction with the inflammatory microenvironment [75,76]. Therefore, MSCs have been widely used in clinical trials to treat autoimmune and inflammatory diseases, particularly in the context of lung injuries [77]. However, a comprehensive understanding of the precise impact of the inflammatory microenvironment on the fate of MSCs is still lacking.
The inflammatory microenvironment plays a key role in mediating the immunoregulatory capability of MSCs [76,78]. MSCs exert enhanced immunosuppressive functions after interaction with inflammatory cytokines, including interferon (IFN)-γ, tumor necrosis factor (TNF)-α, interleukin (IL)-1α, and IL-1β [74,79] (Figure 3). The literature has shown that both fetal and adult MSCs are susceptible to lysis by IL-2-activated natural killer cells [80]. Furthermore, IFN-γ synergistically amplifies TNF-α-induced apoptosis in MSCs, thus impeding their capacity to repair damaged lung tissue and indicating that apoptosis of MSCs can be induced in the inflammatory microenvironment during the development of PF [81].
Recent research has demonstrated that inflammatory cytokines such as IFN-γ and TNF-α activate autophagy in MSCs by upregulating Beclin 1 expression, which attenuates the immunosuppressive capacity of MSCs [19]. Although autophagy has been considered a cell survival mechanism, it can also promote cell death depending on the specific physiological and pathological conditions; understanding of its dual prosurvival and prodeath functions remains incomplete [82,83]. Autophagy constitutes a major adaptive (survival) strategy of cells in response to challenges such as starvation, growth factor withdrawal, and neurodegeneration, but it is also a critical contributor to the death of certain cell types [84,85]. There is evidence that autophagy promotes TNF-α plus IFN-γ-induced apoptosis of MSCs, highlighting the varied functions of autophagy under conditions of inflammation and nutrient scarcity [19]. Consequently, manipulating autophagy in MSCs is a feasible means of optimizing therapeutic effectiveness.
Impact of Declining Autophagy on MSC Aging
As MSC populations age, they undergo functional deterioration and become less effective in vivo or during extended culture in vitro, limiting their therapeutic applications [86][87][88]. The underlying processes that drive MSC senescence remain unclear, but significant progress has been made in elucidating age-related MSC phenotypic changes as well as possible mechanisms that influence MSC senescence [89].
Autophagic activity tends to decrease with age across various model organisms, potentially leading to the buildup of autophagic structures and constraining the capacity for maintaining cellular homeostasis in certain contexts [90][91][92]. Human cell studies have revealed that age-related declines in the breakdown of lysosomal proteins hinder autophagic flux, worsening cellular damage and playing a role in the onset of age-related diseases [93][94][95][96]. Additional evidence has substantiated that aging is linked to diminished expression of several Atg genes, including Atg2 and Atg8a, which play a crucial role in both the initiation and functionality of autophagy [97]. In normally aged mice, autophagy was significantly reduced, as indicated by decreased levels of Atg7, LC3-II, autophagosomes, autophagolysosomal fusion, autophagy substrates, and autophagy receptors [98]. Consistent with this, autophagy was attenuated in both aged rat brain tissue and aged human fibroblasts, as evidenced by significantly decreased levels of autophagy-associated proteins, such as Atg5-Atg12 and Becn1, and significantly increased levels of mTOR and ferritin H [99]. In brain samples from normal older humans, the expression of key autophagy genes like Atg5 and Atg7 was also reduced [100]. Additionally, several age-related human pathologies are closely linked to deficits in autophagy that develop and progress with age [101][102][103]. Taken together, compromised autophagy is a characteristic of organismal aging, as autophagic abundance declines with age and cargo is no longer delivered to the lysosomes efficiently.
Conversely, research on long-lived mutant animals has revealed that increased autophagy is linked to delayed aging. Specifically, the prolonged lifespan observed in C. elegans daf-2 loss-of-function mutants relies on autophagic genes like bec-1, lgg-1, atg-7, and atg-12 [92,104,105]. Moreover, the extended longevity of various longevity mutants, including daf-2 mutants with reduced insulin/insulin-like signaling, germline-less glp-1(e2141) mutants, dietary-restricted eat-2(ad1116) mutants, mitochondrial respiration-defective clk-1(e2519) mutants, and mRNA translation-impaired rsks-1(sv31) mutants, necessitates the presence of HLH-30 [106]. These findings align with evidence of reduced induction of autophagosome formation and lysosomal degradation in the absence of HLH-30, suggesting that HLH-30 plays a pivotal role in promoting longevity by regulating the autophagic process downstream of various lifespan-extending mechanisms [106]. Activation of autophagy with rapamycin can likewise restore the proliferative function of aged MSCs [107]. Further, the formation of long-lived dauer worms, which correspond to a larval hibernation stage, is correlated with increased autophagy and depends on the autophagy genes atg-1, atg-7, lgg-1, and atg-18, demonstrating the importance of autophagy to organismal adaptation under challenging conditions [105]. However, impaired autophagy can increase ROS and lead to MSC aging [108]. Similarly, high-glucose treatment of MSCs increased ROS-mediated autophagy, with elevated levels of Beclin-1, Atg5, Atg7, Atg12, and LC3-II and increased autophagosome formation, which induced MSC aging and local inflammation [109].
Together, the collective research suggests that (1) autophagy is impaired as MSCs undergo aging, (2) autophagy dysfunction shortens the lifespan of MSCs, and (3) enhancing or restoring autophagy prolongs the lifespan and extends the healthspan of MSCs (Figure 4). This demonstrates that autophagy regulation is central to the aging of MSCs (Table 2).
Targeting Autophagy in IPF
Treatment choices for IPF are quite restricted. While recent trials have demonstrated the effectiveness of pirfenidone and nintedanib in slowing the decline of lung function in IPF patients, no medication can reverse or entirely prevent the progression of IPF [110,111]. IPF has emerged as the most prevalent indication for lung transplantation, with a 5-year posttransplant survival rate just slightly exceeding 50% according to the International Society for Heart and Lung Transplantation (ISHLT) registry [112,113]. However, lung transplantation continues to face significant clinical constraints, primarily due to the shortage of available donors [114]. In addition to investigating autophagy mechanisms in IPF, researchers have introduced multiple drugs to mitigate the progression of the disease [115]. Furthermore, an array of compounds with therapeutic potential in IPF through modulation of autophagy is steadily emerging [116,117].
Current Drugs to Treat IPF
Over the past 10 years, researchers have devoted considerable effort to IPF drug design, but still only two approved drugs, pirfenidone and nintedanib, are used in patients with IPF. Under clinical observation, pirfenidone and nintedanib have yielded a discernible reduction in mortality and PF progression among IPF patients [118]. Pirfenidone exerts its antifibrotic effects primarily through inhibition of TGF-β1, a critical mediator involved in IPF development [119,120]. Pirfenidone, an oral pyridone, reduces extracellular matrix (ECM) deposition by interfering with collagen production and fibrinolytic processes and by reducing the production of certain tissue necrosis factors and growth factors [121][122][123]. Notably, pirfenidone can activate ATG7- and ATG5-dependent canonical autophagy in lung fibroblasts, as a decrease in EGFP-LC3 dot formation as well as in LC3 conversion from LC3-I to LC3-II was observed when ATG5 and ATG7 were knocked down [42]. Although pirfenidone-induced autophagy has been clearly demonstrated, the precise mechanism by which pirfenidone inhibits lung fibrosis via autophagy during IPF pathogenesis should be further examined.
Nintedanib is another therapeutic medication with antifibrotic attributes, operating as a multityrosine kinase inhibitor (MTKI) [124]. Nintedanib can inhibit the fibrotic process by targeting PDGFRα-β, FGFR1-3, VEGFR1-3, and SFK [125][126][127][128][129]. Nintedanib has shown antifibrotic and anti-inflammatory activity in animal models of lung fibrosis, interfering with fibrotic processes such as fibroblast proliferation, migration, and differentiation and significantly reducing the deposition of lung collagen [130,131]. In addition, the efficacy and safety of nintedanib in patients with IPF have been demonstrated in phase 3 clinical trials, in which it reduced the decline in forced vital capacity (FVC) and slowed the progression of fibrosis [120]. Furthermore, certain studies have substantiated the ability of nintedanib to restrain the growth of specific lung vascular cells, including endothelial cells and pulmonary artery vascular smooth muscle cells [131]. Notably, one study revealed that nintedanib effectively boosted autophagy, as assessed by the LC3-I/II ratio [132]. Another investigation produced consistent findings, confirming that nintedanib enhanced autophagic flux in fibroblasts, as shown by increased LC3-II formation, and induced Beclin-1-dependent, ATG7-independent autophagy in fibroblasts [133]. Presently, owing to extensive research into autophagy regulation, several autophagy-targeted pulmonary antifibrotic treatments have been identified [134,135].
Potential Compounds for IPF
A growing body of research aims mainly to identify new molecular targets and therapeutic options. Berberine, an important protoberberine alkaloid, shows various pharmacological activities and has been widely used in different therapeutic areas [136]. Berberine is extensively distributed in a variety of herbs, and its synthetic derivatives have gained significant interest in clinical applications [136]. Importantly, berberine can act as an autophagy modulator and be effective against PF via modulating autophagy [137,138]. Berberine can remarkably enhance the expression of LC3 and Beclin-1 while significantly attenuating the p-mTOR, Akt, and MAPK signaling pathways, thereby stimulating autophagosome formation and initiating autophagy [139,140].
Spermidine, an autophagy inducer, enhances Beclin-1-dependent autophagy and autophagy modulators in IPF fibroblasts and bleomycin-induced mouse lungs [141]. Specifically, spermidine upregulated autophagic flux, leading to an increase in the LC3B-I/II ratio and in the expression of ATG7 and Beclin-1 in IPF fibroblasts and bleomycin-induced mouse lungs [141]. In addition, spermidine can reverse autophagy impairment by decreasing the expression of p-mTOR in bleomycin-induced lungs [141]. These findings demonstrate that spermidine enhances autophagy and that this effect may hold promise in the treatment of IPF.
The immune checkpoint PD-1 plays a critical role in controlling the inflammatory response to injury in normal lung tissue. The programmed cell death ligand-1/programmed cell death 1 (PD-L1/PD-1) axis is one of the most essential immune checkpoints regulating immunotherapy. In IPF patients, PD-L1 was found to be overexpressed on alveolar macrophages (AMs) but negative on fibroblast and myofibroblast membranes [142,143]. Blocking PD-L1 can reverse PF by increasing phagocytosis of profibrotic fibroblasts in an in vivo mouse model of fibrosis [144]. The anti-PD-L1 monoclonal antibody (anti-PD-L1 mAb) has been found to significantly inhibit the proliferation and migration of lung fibroblasts and reduce the deposition of ECM [145]. It can increase the expression of the autophagy-related marker protein SQSTM1 and the accumulation of LC3-II, promote the formation of autophagosomes, and ultimately induce autophagy activation in PF [145]. This evidence shows that anti-PD-L1 therapy has the potential to alleviate PF, offering a novel approach to treating IPF.
Bergenin, a compound derived from a variety of medicinal plants, is a major component of Bergenia stracheyi (Saxifragaceae) [146]. Bergenin attenuated bleomycin-induced PF in mice by suppressing myofibroblast activation and promoting autophagy and myofibroblast apoptosis [147]. The study revealed that bergenin significantly reduced the phosphorylation levels of mTOR, ULK1, and S6 and decreased the expression levels of the typical fibroblast activation marker α-SMA and the ECM protein collagen I, thus promoting autophagy and alleviating PF [147].
Moreover, bergenin has the potential to maintain a normal balance between autophagy and apoptosis in IPF fibroblasts by modulating energy metabolism [147]. Overall, additional investigations and animal model assessments are urgently needed to facilitate the development and validation of novel therapeutic agents for IPF that specifically target autophagy.
Conclusion
With the developments in regenerative medicine technology, stem cell therapy has been tested for safety and efficacy in various lung diseases. However, an abnormal health status of MSCs can affect their therapeutic function, especially in IPF. New evidence indicates that modulation of autophagy in MSCs plays an important role in the therapeutic action exerted by MSCs. Inducing or inhibiting autophagic activity in the lung tissue microenvironment can affect the ability of MSCs to repair damaged tissues, especially in IPF. Elevating autophagy generally enhances cellular functions and maintains homeostasis, contributing to prolonged lifespan and improved pulmonary health. However, it is crucial to recognize that a substantial increase in autophagy may potentially reduce lifespan and adversely affect lung health. The therapeutic targeting of autophagy in aging and age-related lung diseases is contingent upon the specific autophagic defects present in different cell types. From the existing literature, it can be postulated that enhancing autophagic activity to augment MSC function in IPF represents a promising therapeutic strategy to improve lung function in the elderly. The sustained health benefits for MSCs are likely to result from achieving an optimal balance of autophagy and are influenced by both lung tissue and organismal age. This review aims to provide more comprehensive insights into how autophagy affects the therapeutic properties of MSCs, thereby broadening the horizon of clinical utilization of MSCs for the treatment of IPF. The development of novel MSC therapies targeting the autophagy signaling pathway may provide an innovative and attractive approach to the field of regenerative medicine.
Abbreviations: FVC: forced vital capacity; PD-1: programmed death-1; PD-L1: programmed cell death ligand-1; AMs: alveolar macrophages; anti-PD-L1 mAb: anti-PD-L1 monoclonal antibody.
TABLE 1: Role of autophagy in the therapeutic potential of MSCs.
TABLE 2: Autophagy influences MSC activity and aging. In MSCs, promoted autophagy partially reverses the aging of MSCs, while declined autophagy attenuates the biological functions of MSCs. Autophagy modulation affects apoptosis and aging in MSCs. | 6,043.8 | 2024-02-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
CONCEPTUALIZING INTEGRATED POLICYMAKING: DOES THE DIVERSIFICATION OF ENVIRONMENTAL POLICY INSTRUMENTS CONTRIBUTE TO INCREASED SUSTAINABILITY?
Most urgent societal issues cut across the boundaries of established jurisdictions. The conventional environmental policy domain is unable to achieve environmental objectives by itself, and each policy sector must integrate environmental objectives. For instance, the lack of clarity about how the integration of environmental objectives into energy policy has transformed and modified energy policy is one reason behind the low levels of integrated policymaking achieved. The present research attempts to clarify how the diversification of environmental policy instruments contributes to integrated policymaking. The present research explicitly confirms that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains increases the extent of diversification of environmental policy instruments; that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains and structures that coordinate and monitor efforts within relevant policy domains increases the extent of diversification of environmental policy instruments; and that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains, ultimately resulting in a cross-sectoral instrument blend, increases the extent of diversification of environmental policy instruments.
INTRODUCTION
Most severe societal issues concerning health, food, energy, transport, climate, innovation, freedom, etc., cut across the boundaries of established jurisdictions, namely policy domains and governance levels. Requirements for integrated policymaking on these issues are increasingly apparent; however, obstacles to integrated policymaking, such as inappropriate instrument mixes, competing attention over issues, competing and incoherent objectives, fragmentation, compartmentalization, etc., make it harder to achieve. These obstacles gain in relevance when these societal issues are confronted with hierarchical governance and its traditional forms of subsystem involvement.
In hierarchical governance and its traditional forms of subsystem involvement, policymaking takes place within a relatively stable set of actors, each of whom displays a particular set of beliefs, interests, and perceptions, which genuinely does not allow for an integrated policymaking approach to these issues. Adding further to the complexity of integrated policymaking, the number of actors taking part in policymaking has increased, namely due to the increasing emphasis on the participation of private industry, public administration, interest groups, civil society, the general public, etc.
Most severe societal issues that cut across the boundaries of established jurisdictions link the seemingly incompatible objectives of economic competitiveness, social development, and environmental protection with the concept of sustainability or sustainable development (SD). These issues thus also link to the objective of integrating concerns of environmental protection with economic competitiveness and social development, i.e., Environmental Policy Integration (EPI). Therefore, one of the most important illustrations of SD is its emphasis on the inclusion or integration of environmental concerns or objectives into policy domains outside the domain of conventional environmental policy. Claims for such inclusion or integration are found throughout the literature (see, for instance, Adelle and Russel, 2013; Jordan and Lenschow, 2010; Mullally and Dunphy, 2015; Runhaar et al., 2014; Runhaar et al., 2017; Solorio, 2011; Söderberg, 2011; Uittenbroek et al., 2013; Wamsler, 2015; Wejs, 2014). Accordingly, it has been revealed explicitly that the conventional environmental policy domain is not able to achieve environmental objectives by itself, and that each policy sector must take into consideration and integrate environmental objectives if these objectives are to be achieved at all. The literature thus explicitly views EPI as a necessary modification of policymaking to guide society as a whole in a more sustainable direction.
Similarly, societal issues over the energy sector cut across the boundaries of established jurisdictions, namely policy domains and governance levels. Apart from the more general literature which, for instance, illustrates that the integration of environmental concerns or objectives into energy policy is a stance upon which the European Union (EU) governs its energy-related issues (see, for instance, Solorio, 2011), the more explicit literature reveals reasons for studying the energy sector in the context of the present research: CO2 emission reductions, economic impacts, and energy security (see, for instance, IEA, 2019). Another important reason for studying the energy sector in the context of the present research is the potential of renewable electricity (RES-E) (Knudsen, 2010, 2012).
Another important matter illustrated by the literature in general is the lack of clarity about how the integration of environmental concerns or objectives into energy policy has transformed and modified energy policy. For this reason, only modest levels of integrated policymaking have so far been achieved in the energy sector, as well as in other policy domains not related to the domain of conventional environmental policy.
The present research thus aims to clarify how integrated policymaking adds value to the diversification of environmental policy instruments and to the inclusion or integration of environmental concerns or objectives into the energy policy domain.
Conceptualizing Integrated Policymaking
EPI became the most important concept in environmental governance with the publication of the Brundtland report in 1987, which defined SD as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (WCED, 1987, p. 2), illustrating that the integration of economic competitiveness, social development, and environmental protection is central to the concept of SD. Following the publication of the Brundtland report, integration was officially recognized as a principle of international law. For instance, EPI was legally accepted by the Treaty on European Union (EU) in 1993 and incorporated into the Treaty establishing the European Community (TEC), prescribing the integration of environmental protection into the EU's relevant policies and activities, most explicitly with regard to the promotion of sustainability.
Similarly, to properly understand the development of European integration, it is necessary to understand the role energy has played in this process. It is rather difficult to explain the creation of the EU without taking into consideration the creation of the European Coal Organization (ECO) in 1946 and the Organization for European Economic Co-operation (OEEC) in 1948, which shows that energy was a foundation of European integration. A similar motivation is found behind the creation of the European Coal and Steel Community (ECSC) in 1952 and the European Atomic Energy Community (EURATOM) in 1958, constituting basic pillars of the European Economic Community (EEC). Moreover, a similar motivation is found behind the Élysée Treaty signed in 1963 between France and Germany, aimed at reconciling their relationship, which was later anchored in the EU. However, despite these efforts, the integration process did not develop sufficiently to form a basis for a common energy policy. It was not until the Lisbon reform in 2009 and the Treaty on the Functioning of the European Union (TFEU) that the necessary changes were brought to this sector and the policy goals, otherwise known as the energy trinity, were set: security of supply, affordable energy, and environmental sustainability. The commencement of the "Cardiff Process" in 1998 illustrates a step forward towards the practical application of EPI, with Renewable Energy Sources (RES) and energy efficiency forming the basis of a sustainable energy system; the 2020 Climate and Energy Package, adopted in 2009, is largely viewed as the flagship instrument of the EU's forward-looking perspective on this sustainable energy model (Oberthür and Pallemaerts, 2010). The package involves a 20 percent reduction in greenhouse gas (GHG) emissions (from 1990 levels), a 20 percent share of energy from RES, and a 20 percent improvement in energy efficiency. Accordingly, the 2030 Climate and Energy Framework, adopted in 2014, involves a 40 percent reduction in GHG emissions (from 1990 levels), a 32 percent share of energy from RES, and a 32.5 percent improvement in energy efficiency. Similarly, a new science-policy-society interface for the 2030 Agenda emphasizes the role of the European Advisory Councils on the Environment and SD. The implementation of the 2030 Agenda at the national, sub-national, and local levels necessitates a strong association between the most important actors, namely governments, the scientific community, and a broad variety of other actors. The European Advisory Councils on the Environment and SD play an important role in knowledge dissemination and agenda setting.
Framework for the Analysis of Integrated Policymaking
EPI is defined as "the incorporation of environmental objectives into all stages of policymaking in non-environmental policy sectors, with a specific recognition of this goal as a guiding principle for the planning and execution of policy" (Lafferty and Hovden, 2003, p. 9). Furthermore, this principle should be "accompanied by an attempt to aggregate presumed environmental consequences into an overall evaluation of policy, and a commitment to minimize contradictions between environmental and sectoral policies by giving principled priority to the former over the latter" (Lafferty and Hovden, 2003, p. 9). A term similar to EPI is climate policy integration (CPI) (Jordan and Lenschow, 2010). Even though different meanings are linked to CPI, the notion is the same as that of EPI.
EPI strategies found in the literature are relatively diverse, ranging from regulatory and financial to institutional and organizational. Likely due to the diverse nature of these strategies, a comprehensive overview of EPI is virtually non-existent. Some of the most frequent examples include Strategic Environmental Assessment (SEA), Environmental Impact Assessment (EIA), green budgeting, green taxes, biodiversity conservation markets, environmental units within sectoral departments, green departments, combinations of departments, environmental reporting obligations, sustainable development strategies (SDS), national environmental plans, etc. EPI strategies also include partnerships between public and private actors, voluntary sector-wide agreements between public and private actors, voluntary sectoral self-governance, municipal voluntarism, and self-organized and self-governed management. The literature also illustrates different frameworks for measuring the levels of EPI achieved: some (see, for instance, Weber and Driessen, 2010; Dupont and Oberthür, 2012) generally distinguish the levels of coordination, harmonization, and prioritization, while more recent studies (see, for instance, Uittenbroek et al., 2013; Brouwer et al., 2013) generally distinguish the levels of inclusion, consistency, weighting, and reporting. Broadly speaking, the performance of EPI strategies could be evaluated in terms of physical indicators, such as a reduction of climate risks, CO2 emissions, environmental quality, etc. (see Adelle and Russel, 2013). However, given that such an evaluation would be rather difficult to carry out, the reported levels of EPI found in the cited literature relate to EPI strategies that are influential in decision-making or policymaking, as well as in the implementation of these decisions or policies.
Evaluations of EPI strategies are found in the literature focusing on particular policy sectors and different countries: noise and spatial planning (Weber and Driessen, 2010), bioenergy policy (Söderberg, 2011), water policy (Brouwer et al., 2013), urban planning (Uittenbroek et al., 2013), etc. Evaluations are also found focusing on the performance of particular strategies: sustainable development strategies (Steurer and Hametner, 2013), green budgeting (Russel and Benson, 2013), and market-based mechanisms, such as cap-and-trade systems, green taxes, and biodiversity conservation markets (Ward and Cao, 2012; Lu et al., 2012; Alvarado-Quesada et al., 2014), etc. Furthermore, the literature points to the governance of EPI as the most important obstacle in decision-making or policymaking. In trade-offs, sectoral and environmental issues and concerns necessarily come into conflict, and there is a lack of political will to prioritize environmental issues and concerns (see Dupont and Oberthür, 2012). The literature points to the implementation of EPI as another obstacle, because relevant institutions and bodies, such as governmental ministries, lack the resources, power, and authority to enforce EPI (see Dupont and Oberthür, 2012). Lists of factors that ease or constrain EPI are also found in the more general literature (cultural, instrumental, economic, organizational, and political factors) as well as in empirical research: contextual, procedural, and organizational factors (Weber and Driessen, 2010); political factors, networks, skills, personal motivation, relationships between policy sectors and organizational factors, decision-making, the characteristics of the actors, the outcomes of the environmental assessment, and the legal basis (Arts et al., 2012); and entry and exit barriers, transaction costs, complete information, appropriate actors, and property rights (Alvarado-Quesada et al., 2014). Similarly, lists of factors that ease or constrain CPI have been identified: institutional, socio-cultural, cognitive, knowledge and informational, and financial factors (Biesbroek et al., 2013).
Policy Framing as a Method of Learning in Integrated Policymaking
EPI strategies found in the literature can also be observed from an institutional perspective (see, for instance, Wejs, 2014; Wejs and Cashmore, 2014), which emphasizes that finding the appropriate discourse is necessary to gain legitimacy for EPI. According to this research, climate change is less difficult to integrate when framed as a mechanism of socio-economic development than when framed as an environmental problem. The literature has already widely endorsed a policy-learning approach to analyzing integrated policymaking, illustrating the increasing focus on the role of a new science-policy-society interface, which necessitates a strong association between the most important actors, namely governments, the scientific community, and a broad variety of other actors. The implication of learning in policymaking contributes to a change in the policy process and policymaking, which can be viewed as a precondition for sustainability. Broadly speaking, learning is possible by adapting to changing conditions and acquiring new knowledge based on experience gained through the policy process. Thus, in contrast to hierarchical governance and its traditional forms of subsystem involvement, the policy-learning approach to analyzing integrated policymaking implies that policy is created in networking processes with both public and private actors that have different ideas and interests. Policy networks can therefore be described as informal structures enabling communication and interaction between actors. To avoid any potential ambiguity involved with the policy-learning approach to analyzing integrated policymaking, a unifying concept of policy framing is introduced. According to Rein and Schön (1993, p. 146), "framing is a way of selecting, organizing, interpreting, and making sense of a complex reality to provide guideposts for knowing, analyzing, persuading and acting". Moreover, "a frame is a perspective from which an amorphous, ill-defined, problematic situation can be made sense of and acted on" (Rein and Schön, 1993, p. 146). Policy frames can thus help interpret environmental requirements as a mechanism toward the achievement of environmental protection and market harmonization in general. Lenschow and Zito (1998) analyze how policy frames have influenced the manner in which European Community (EC) actors understand the integration of environmental and economic policy objectives. They argue that there are three EC policy frames, conditional, classic, and sustainability, illustrating how the integration of environmental and economic policy objectives takes place. The conditional environmental policy frame conceptualizes EC environmental regulation as preventing trade distortions produced by diverging national environmental standards rather than creating a European environmental policy regime. The conditional environmental policy frame thus assumes that environmental and economic policy function independently from one another, and that policymaking must necessarily be viewed as a choice between the incompatible objectives of environmental and economic policy. Several regulatory features follow from such an assumption and are observable in an explicitly hierarchical structure of administration, according to which the EC is distinguished as the primary actor in the economy for creating harmonized market conditions. According to this frame, environmental regulations and standards are applied uniformly within EU member states, which implies the use of command-and-control policy instruments.
The classic environmental policy frame conceptualizes the EU environmental regulation to increase environmental awareness in terms of limiting safety, health, and environmental risks, requiring policy compromise. Environmental and economic policies continue to function independently from one another, however. The EU is still viewed as the primary actor in the market aiming to create harmonized market conditions with command-and-control policy instruments remaining uniformly applied within EU member states. The sustainability policy frame no longer assumes that environmental and economic policy function independently from one another, and acknowledges policy compromises as a necessary condition for long-term economic development. Decision-making or policymaking thus applies the principle of partnership and integrated policymaking, which prescribe the internalization of environmental costs in market transactions, i.e., economic policy instruments, self-regulation, learning tools, etc.
Policy Instruments and Integrated Policymaking
The principle of integrated policymaking thus suggests the internalization of environmental costs in market transactions, i.e., economic policy instruments. Economic or market-based policy instruments are mechanisms that encourage conduct through explicit market signals. As such, market-based policy instruments encourage firms and individuals to undertake pollution-control efforts that are in their own interest and that collectively fulfill policy requirements. Market-based policy instruments of the sustainability policy frame therefore contrast with the other environmental regulations and standards of the conditional and classic policy frames, which are applied uniformly within EU member states. Environmental regulations and standards with uniform application tend to force firms to assume similar shares of the pollution-control burden, irrespective of the cost, which can be expensive and counterproductive. In contrast, by providing incentives for reductions in pollution by those firms that can achieve them in the cheapest possible manner, market-based policy instruments allow any desired level of pollution-control effort to be achieved at the lowest possible cost for society. Market-based policy instruments thus have the potential to provide powerful incentives for firms to obtain cheaper and improved pollution-control technologies. Market-based policy instruments fall within four broad categories: government subsidy reductions, market friction reductions, tradable permits, and pollution charges.
Within the context of market-based policy instruments, the literature illustrates an increasing interest in the design of subsidy regimes to accommodate the adequate development of RES-E. Accordingly, subsidy regimes fall within two broad approaches: direct and indirect (Batlle et al., 2011). Direct approaches refer to investment support, such as support mechanisms, tax reductions, capital grants, etc. A feed-in tariff (FIT) is an example of a direct approach and support mechanism. Indirect approaches refer to institutional support mechanisms and implicit payments, such as positive discriminatory rules, below-cost provision of infrastructure, funding of research and development, etc.
The Development of the Research Hypotheses
The present research has derived its research hypotheses from the literature review. The research hypotheses can be summarized as follows:
H1:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains, the higher the extent of diversification of environmental policy instruments.
H2:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains and structures that coordinate and monitor efforts within relevant policy domains, the higher the extent of diversification of environmental policy instruments.
H3:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains, ultimately resulting in a cross-sectoral instrument blend, the higher the extent of diversification of environmental policy instruments.
Sekaran and Bougie (2009, p. 3) define business research as "organized, systematic, data-based, critical, objective inquiry or investigation into a specific problem, undertaken to find answers or solutions to it." The present research can thus be understood as a process of finding a solution to an issue after a thorough study and analysis of the various factors.
Research Designs
Given that there are no earlier studies or analyses to refer to on the subject, the present research looks for patterns and ideas to create hypotheses. The hypotheses are then tested. It is expected that the hypotheses will not only provide conclusive answers to the subject, but also provide guidance on what future research should be conducted.
The present research uses a survey as its research technique, with e-mail, online, face-to-face, group distribution, and individual distribution as data collection methods. The survey was conducted from January 15th, 2020 to February 15th, 2020. The sample for data collection includes EU institutions and bodies (namely, the European Council and the European Commission), academic institutions (public and private universities), and private investors (investment funds and venture capitalists). Approximately 700 questionnaires were distributed in order to obtain the desired number of 100 confirmed respondents. The present research also uses descriptive statistics to summarize the data in a more compact and graphical form.
Furthermore, the present research is designed to apply its findings to solve a specific problem. It represents the application of existing knowledge to the improvement of public policies and managerial practices.
Finally, the present research is research in which a conceptual and theoretical structure is developed, which is then tested by empirical observation. Particular instances are, thus, deduced from general inferences.
H1:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains, the higher the extent of diversification of environmental policy instruments. The statistics show that 27 percent of the respondents strongly agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains increases the extent of diversification of environmental policy instruments. Another 41 percent of the respondents agree with said statement, followed by 17 percent who neither agree nor disagree, 7 percent who disagree, and 8 percent who strongly disagree.
H2:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains and structures that coordinate and monitor efforts within relevant policy domains, the higher the extent of diversification of environmental policy instruments. The statistics illustrate that 31 percent of the respondents strongly agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains and structures that coordinate and monitor efforts within relevant policy domains increases the extent of diversification of environmental policy instruments. Another 39 percent of the respondents agree with said claim, followed by 10 percent who neither agree nor disagree, 12 percent who disagree, and 8 percent who strongly disagree.
H3:
The higher the extent of inclusion of environmental policy instruments within relevant policy domains, ultimately resulting in a cross-sectoral instrument blend, the higher the extent of diversification of environmental policy instruments. The statistics display that 25 percent of the respondents strongly agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains, ultimately resulting in a cross-sectoral instrument blend, increases the extent of diversification of environmental policy instruments. Another 44 percent of the respondents agree with said claim, followed by 14 percent who neither agree nor disagree, 9 percent who disagree, and 8 percent who strongly disagree.
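The combined agreement rates cited in the Conclusion follow from these distributions by summing the "strongly agree" and "agree" shares. A minimal Python sketch of this aggregation (the script is purely illustrative and not the study's actual analysis tooling; the shares are those reported above):

# Likert shares (percent) reported for H1-H3, in the order:
# [strongly agree, agree, neutral, disagree, strongly disagree]
shares = {
    "H1": [27, 41, 17, 7, 8],
    "H2": [31, 39, 10, 12, 8],
    "H3": [25, 44, 14, 9, 8],
}

for hypothesis, dist in shares.items():
    # Each distribution should account for all respondents.
    assert sum(dist) == 100, "shares should sum to 100 percent"
    combined = dist[0] + dist[1]  # strongly agree + agree
    print(f"{hypothesis}: combined agreement {combined}%")
# Prints: H1: 68%, H2: 70%, H3: 69%, matching the Conclusion.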
CONCLUSION
Most severe societal issues cut across the boundaries of established jurisdictions through policy domains and governance levels. Even though there are increasing requirements for integrated policymaking on these issues, obstacles to integrated policymaking make it increasingly difficult to achieve, particularly when these societal issues are confronted with hierarchical governance and its traditional forms of subsystem involvement. Adding to the complexity of integrated policymaking, the number of actors involved in policymaking has increased.
Most severe societal issues that cut across the boundaries of established jurisdictions connect the seemingly incompatible objectives of economic competitiveness, social development, and environmental protection with the concept of SD, and link these objectives to the integration of environmental protection concerns with economic competitiveness and social development, i.e., EPI. Therefore, one of the most important illustrations of SD is the inclusion or integration of environmental concerns or objectives into policy domains that are not related to the domain of conventional environmental policy, i.e., EPI. The necessity of such inclusion or integration is established in the literature review, which reveals the inability of the conventional environmental policy domain to achieve environmental objectives by itself and the need for each policy sector to take into consideration and integrate environmental objectives if they are to be achieved at all. The literature therefore views EPI as a necessary modification of policymaking to guide society as a whole in a more sustainable direction.
In a similar manner, societal issues over the energy sector cut across the boundaries of established jurisdictions, namely policy domains and governance levels. The literature reveals that, within the context of the present research, the energy sector is very important due to how the EU governs its energy-related issues, as well as for more explicit reasons, such as CO2 emission reductions, economic impacts, and energy security, and for the potential of RES-E.
The lack of clarity about how the integration of environmental concerns or objectives into energy policy has transformed and modified energy policy is the main reason behind the modest levels of integrated policymaking achieved in the energy sector. The present research thus attempts to clarify how integrated policymaking adds value to the diversification of environmental policy instruments and to the inclusion or integration of environmental concerns or objectives into the energy policy domain.
Accordingly, for H1, 68 percent of the respondents either strongly agree or agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains increases the extent of diversification of environmental policy instruments. For H2, 70 percent of the respondents either strongly agree or agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains and structures that coordinate and monitor efforts within relevant policy domains increases the extent of diversification of environmental policy instruments. For H3, 69 percent of the respondents either strongly agree or agree that an increase in the extent of inclusion of environmental policy instruments within relevant policy domains, ultimately resulting in a cross-sectoral instrument blend, increases the extent of diversification of environmental policy instruments.
Within the context of the present research, any attempt to diversify environmental policy instruments can be seen, from the perspective of integrated policymaking, as a deliberate and structured effort by policymakers to address a policy issue by modifying the actions of integrated policymaking. | 6,029.8 | 2020-01-01T00:00:00.000 | [
"Political Science",
"Economics"
] |
Nanofluid bioconvection in water-based suspensions containing nanoparticles and oxytactic microorganisms: oscillatory instability
The aim of this article is to propose a novel type of nanofluid that contains both nanoparticles and motile (oxytactic) microorganisms. The benefits of adding motile microorganisms to the suspension include enhanced mass transfer, microscale mixing, and anticipated improved stability of the nanofluid. In order to understand the behavior of such a suspension at the fundamental level, this article investigates its stability when it occupies a shallow horizontal layer. The oscillatory mode of nanofluid bioconvection may be induced by the interaction of three competing agencies: oxytactic microorganisms, heating or cooling from the bottom, and a top- or bottom-heavy nanoparticle distribution. The model includes equations expressing conservation of total mass, momentum, thermal energy, nanoparticles, microorganisms, and oxygen. Physical mechanisms responsible for the slip velocity between the nanoparticles and the base fluid, such as Brownian motion and thermophoresis, are accounted for in the model. An approximate analytical solution of the eigenvalue problem is obtained using the Galerkin method. The obtained solution provides important physical insights into the behavior of this system; it also explains when the oscillatory mode of instability is possible in such a system.
Introduction
The term "nanofluid" was coined by Choi in his seminal paper presented in 1995 at the ASME Winter Annual Meeting [1]. It refers to a liquid containing a dispersion of submicronic solid particles (nanoparticles) with typical length on the order of 1-50 nm [2]. The unique properties of nanofluids include the impressive enhancement of thermal conductivity as well as overall heat transfer [3][4][5][6][7]. Various mechanisms leading to heat transfer enhancement in nanofluids are discussed in numerous publications; see, for example [8][9][10][11][12].
Wang [13][14][15] pioneered in developing the constructal approach, created by Bejan [16][17][18][19], for designing nanofluids. Nanofluids enhance the thermal performance of the base fluid; the utilization of the constructal theory makes it possible to design a nanofluid with the best microstructure and performance within a specified type of microstructures.
Recent publications show significant interest in applications of nanofluids in various types of microsystems. These include microchannels [20], microheat pipes [21], microchannel heat sinks [22], and microreactors [23]. There is also significant potential in using nanomaterials in different bio-microsystems, such as enzyme biosensors [24]. In [25], the performance of a bioseparation system for capturing nanoparticles was simulated. There is also strong interest in developing chip-size microdevices for evaluating nanoparticle toxicity; Huh et al. [26] suggested a biomimetic microsystem that reconstitutes the critical functional alveolar-capillary interface of the human lung to evaluate toxic and inflammatory responses of the lung to silica nanoparticles.
The aim of this article is to propose a novel type of nanofluid that contains both nanoparticles and oxytactic microorganisms, such as the soil bacterium Bacillus subtilis. These particular microorganisms are oxygen consumers that swim up an oxygen concentration gradient. There are important similarities and differences between nanoparticles and motile microorganisms. In their impressive review of nanofluids research, Wang and Fan [27] pointed out that nanofluids involve four scales: the molecular scale, the microscale, the macroscale, and the megascale, and that there is interaction between these scales. For example, by manipulating the structure and distribution of nanoparticles, the researcher can impact macroscopic properties of the nanofluid, such as its thermal conductivity. Similar to nanofluids, in suspensions of motile microorganisms that exhibit spontaneous formation of flow patterns (a phenomenon called bioconvection), physical laws that govern smaller scales lead to a phenomenon visible at a larger scale. While superfluidity and superconductivity are quantum phenomena visible at the macroscale, bioconvection is a mesoscale phenomenon, in which the motion of motile microorganisms induces a macroscopic motion (convection) in the fluid. This happens because motile microorganisms are heavier than water and generally swim in the upward direction, causing an unstable top-heavy density stratification which, under certain conditions, leads to the development of hydrodynamic instability. Unlike motile microorganisms, nanoparticles are not self-propelled; they move due to such phenomena as Brownian motion and thermophoresis and are carried by the flow of the base fluid. By contrast, motile microorganisms can actively swim in the fluid in response to such stimuli as gravity, light, or chemical attraction. Combining nanoparticles and motile microorganisms in a suspension makes it possible to exploit the benefits of both of these microsystems.
One possible application of bioconvection in bio-microsystems is for mass transport enhancement and mixing, which are important issues in many microsystems [28,29]. The results presented in [30] also suggest using bioconvection in a toxic compound sensor, owing to the ability of some toxic compounds to inhibit flagellar movement and thus suppress bioconvection. In addition, preventing nanoparticles from agglomerating and aggregating remains a significant challenge. One of the reasons this is challenging is that although inducing mixing at the macroscale is easy and can be achieved by stirring, inducing and controlling mixing at the microscale is difficult. Bioconvection can provide both types of mixing. Macroscale mixing is provided by inducing the unstable density stratification due to microorganisms' upswimming. Mixing at the microscale is provided by the flagella (or flagella bundle) motion of individual microorganisms. Due to flagella rotation, microorganisms push fluid along their axis of symmetry and suck it in from the sides [31]. While the estimates given in [32] show that the stresslet stress produced by individual microorganisms has a negligible effect on the macroscopic motion of the fluid (which is rather driven by the buoyancy force induced by the top-heavy density stratification due to microorganisms' upswimming), the effect produced by flagella rotation is not negligible on the microscopic scale (the scale of a microorganism and a nanoparticle).
In order to use suspensions containing both nanoparticles and motile microorganisms in microsystems, the behavior of such suspensions must be understood at the fundamental level. Bio-thermal convection caused by the combined effect of the upswimming of oxytactic microorganisms and temperature variation was investigated in [33][34][35][36]. Bioconvection in nanofluids is expected to occur if the concentration of nanoparticles is small, so that the nanoparticles do not cause any significant increase in the viscosity of the base fluid. The problem of bioconvection in suspensions containing small solid particles (nanoparticles) was first studied in [37][38][39][40][41] and then recently in [42]. Non-oscillatory bioconvection in suspensions of oxytactic microorganisms was considered in Kuznetsov AV: Nanofluid bioconvection: Interaction of microorganisms oxytactic upswimming, nanoparticle distribution and heating/cooling from below. Theor Comput Fluid Dyn 2010, submitted. This article extends that theory to the case of oscillatory convection in suspensions containing both nanoparticles and oxytactic microorganisms.
Governing equations
The governing equations are formulated for a water-based nanofluid containing nanoparticles and oxytactic microorganisms. The nanofluid occupies a horizontal layer of depth H. It is assumed that the nanoparticle suspension is stable. According to Choi [2], there are methods (including suspending nanoparticles using either surfactant or surface charge technology) that lead to stable nanofluids. It is further assumed that the presence of nanoparticles has no effect on the direction of microorganisms' swimming or on their swimming velocity. This is a reasonable assumption if the nanoparticle suspension is dilute; the concentration of nanoparticles has to be small anyway for the bioconvection-induced flow to occur (otherwise, a large concentration of nanoparticles would result in a large suspension viscosity, which would suppress bioconvection).
In formulating the governing equations, the terms pertaining to nanoparticles are written using the theory developed by Buongiorno [43], while the terms pertaining to oxytactic microorganisms are written using the approach developed by Hillesdon and Pedley [44,45].
The continuity equation for the nanoparticle-microorganism suspension considered in this research is $\nabla \cdot \mathbf{U} = 0$ (Equation 1), where U = (u,v,w) is the dimensionless nanofluid velocity, defined as U*H/α_f; U* is the dimensional nanofluid velocity; α_f is the thermal diffusivity of the nanofluid, k/(ρc)_f; k is the thermal conductivity of the nanofluid; and (ρc)_f is the volumetric heat capacity of the nanofluid.
The dimensionless coordinates are defined as (x,y,z) = (x*, y*, z*)/H, where z is the vertically downward coordinate.
The buoyancy force can be considered to be made up of three separate components that result from the temperature variation of the fluid, the nanoparticle distribution (nanoparticles are heavier than water), and the microorganism distribution (microorganisms are also heavier than water). Utilizing the Boussinesq approximation (which is valid because the inertial effects of the density stratification are negligible; the dominant factor multiplying the inertia terms is the density of the base fluid, which far exceeds the density stratification), the momentum equation can be written in the form of Equation 2, where k̂ is the vertically downward unit vector.
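A representative form of Equation 2, reconstructed under the Buongiorno-type formulation assumed here and consistent with the dimensionless groups defined below (the grouping of the buoyancy terms follows from k̂ pointing vertically downward), is:

$$\frac{1}{Pr}\left[\frac{\partial \mathbf{U}}{\partial t} + \left(\mathbf{U}\cdot\nabla\right)\mathbf{U}\right] = -\nabla p + \nabla^{2}\mathbf{U} + \left(Rm - Ra\,T + Rn\,\phi + Rb\,n\right)\hat{\mathbf{k}} \tag{2}$$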
The dimensionless variables in Equation 2 are defined in Equations 3 and 4, where t is the dimensionless time, p is the dimensionless pressure, φ is the relative nanoparticle volume fraction, T is the dimensionless temperature, and n is the dimensionless concentration of microorganisms; t* is the time, p* is the pressure, μ is the viscosity of the suspension (containing the base fluid, nanoparticles, and microorganisms), φ* is the nanoparticle volume fraction, φ_0* is the nanoparticle volume fraction at the lower wall, φ_1* is the nanoparticle volume fraction at the upper wall, T* is the nanofluid temperature, T_c is the temperature at the upper wall (also used as the reference temperature), T_h is the temperature at the lower wall, n* is the concentration of microorganisms, and n_0 is the average concentration of microorganisms (the concentration in a well-stirred suspension). The dimensionless parameters in Equation 2, namely, the Prandtl number, Pr; the basic-density Rayleigh number, Rm; the traditional thermal Rayleigh number, Ra; the nanoparticle concentration Rayleigh number, Rn; the bioconvection Rayleigh number, Rb; and the bioconvection Lewis number, Lb, are defined in Equation 5, where ρ_f0 is the base-fluid density at the reference temperature; ρ_p is the nanoparticle mass density; g is the gravity; β is the volumetric thermal expansion coefficient of the base fluid; Δρ is the density difference between microorganisms and the base fluid, ρ_mo − ρ_f0; ρ_mo is the microorganism mass density; θ is the average volume of a microorganism; and D_mo is the diffusivity of microorganisms (in this model, following [44,45], all random motions of microorganisms are simulated by a diffusion process).
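Plausible reconstructions of Equations 3-5, chosen to be dimensionally consistent with the definitions above (these are the standard scalings of the Buongiorno/Hillesdon-Pedley framework, not verbatim transcriptions), are:

$$t = \frac{\alpha_f\, t^{*}}{H^{2}}, \quad p = \frac{H^{2} p^{*}}{\mu\, \alpha_f}, \quad \phi = \frac{\phi^{*} - \phi_0^{*}}{\phi_1^{*} - \phi_0^{*}}, \quad T = \frac{T^{*} - T_c}{T_h - T_c}, \quad n = \frac{n^{*}}{n_0} \tag{3, 4}$$

$$Pr = \frac{\mu}{\rho_{f0}\,\alpha_f}, \quad Rm = \frac{\left[\rho_p \phi_0^{*} + \rho_{f0}\left(1 - \phi_0^{*}\right)\right] g H^{3}}{\mu\, \alpha_f}, \quad Ra = \frac{\rho_{f0}\, g \beta H^{3}\left(T_h - T_c\right)}{\mu\, \alpha_f},$$
$$Rn = \frac{\left(\rho_p - \rho_{f0}\right)\left(\phi_1^{*} - \phi_0^{*}\right) g H^{3}}{\mu\, \alpha_f}, \quad Rb = \frac{\Delta\rho\, g\, \theta\, n_0 H^{3}}{\mu\, \alpha_f}, \quad Lb = \frac{\alpha_f}{D_{mo}} \tag{5}$$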
The conservation equation for nanoparticles (Equation 6) contains two diffusion terms on the right-hand side, which represent the Brownian diffusion of nanoparticles and their transport by thermophoresis (a detailed derivation is available in [43,46]). In Equation 6, the nanoparticle Lewis number, Ln, and a modified diffusivity ratio, N_A (a parameter somewhat similar to the Soret parameter that arises in cross-diffusion phenomena in solutions), are defined in Equation 7, where D_B is the Brownian diffusion coefficient of nanoparticles and D_T is the thermophoretic diffusion coefficient.
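A representative form of Equations 6 and 7, consistent with the definitions above (a reconstruction in the spirit of the Buongiorno model), is:

$$\frac{\partial \phi}{\partial t} + \mathbf{U}\cdot\nabla\phi = \frac{1}{Ln}\nabla^{2}\phi + \frac{N_A}{Ln}\nabla^{2}T \tag{6}$$

$$Ln = \frac{\alpha_f}{D_B}, \qquad N_A = \frac{D_T\left(T_h - T_c\right)}{D_B\, T_c\left(\phi_1^{*} - \phi_0^{*}\right)} \tag{7}$$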
The right-hand side of the thermal energy equation for a nanofluid (Equation 8) accounts for thermal energy transport by conduction in the nanofluid as well as for the energy transport associated with the mass flux of nanoparticles (again, a detailed derivation is available in [43,46]). In Equation 8, N_B is a modified particle-density increment, defined in Equation 9, where (ρc)_p is the volumetric heat capacity of the nanoparticles.
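A representative form of Equations 8 and 9 under the same assumptions is:

$$\frac{\partial T}{\partial t} + \mathbf{U}\cdot\nabla T = \nabla^{2}T + \frac{N_B}{Ln}\,\nabla\phi\cdot\nabla T + \frac{N_A N_B}{Ln}\,\nabla T\cdot\nabla T \tag{8}$$

$$N_B = \frac{(\rho c)_p\left(\phi_1^{*} - \phi_0^{*}\right)}{(\rho c)_f} \tag{9}$$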
The right-hand side of the equation expressing the conservation of microorganisms describes three modes of microorganism transport: due to the macroscopic motion (convection) of the fluid, due to the self-propelled directional swimming of microorganisms relative to the fluid, and due to diffusion, which approximates all stochastic motions of microorganisms. Here V is the dimensionless swimming velocity of a microorganism, V*H/α_f, which is calculated from Equation 11 [44,45], in which Ĥ is the Heaviside step function and C is the dimensionless oxygen concentration, defined in Equation 12, where C* is the dimensional oxygen concentration, C₀ is the upper-surface oxygen concentration (the upper surface is assumed to be open to the atmosphere), and C_min is the minimum oxygen concentration that microorganisms need to remain active. Equation 11 thus assumes that microorganisms swim up the oxygen concentration gradient and that their swimming velocity is proportional to that gradient; however, in order for the microorganisms to be active, the oxygen concentration needs to be above C_min. Since this article deals with a shallow-layer situation, it is assumed that C > C_min throughout the layer thickness, so that the Heaviside step function, Ĥ(C), in Equation 11 is equal to unity.
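In reconstructed form (a sketch consistent with the verbal description; Equations 10 to 12 are not reproduced in this extraction), the conservation equation, the swimming law, and the oxygen scaling read

∂n/∂t + U·∇n + ∇·(n V) = (1/Lb) ∇²n, V = Pe Ĥ(C) ∇C, C = (C* − C_min)/(C₀ − C_min),

where the convective, swimming, and diffusive contributions correspond, in order, to the three transport modes listed above.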
Also, the bioconvection Péclet number, Pe, appearing in Equation 11 is defined by Equation 13, where b is the chemotaxis constant (which has the dimension of length) and W_mo is the maximum swimming speed of a microorganism (the product bW_mo is assumed to be constant).
Finally, the oxygen conservation equation is given by Equation 14. The first term on its right-hand side represents oxygen diffusion, while the second term represents oxygen consumption by the microorganisms.
The new dimensionless parameters in Equation 14 are defined in Equation 15, where Le is the traditional Lewis number, σ̂ is the dimensionless parameter describing oxygen consumption by the microorganisms, D_S is the diffusivity of oxygen, and γ is a dimensional constant describing the consumption of oxygen by the microorganisms.
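A reconstruction of Equations 14 and 15 consistent with these definitions and with the time and length scalings assumed above is

∂C/∂t + U·∇C = (1/Le) ∇²C − (σ̂/Le) n, with Le = α_f/D_S and σ̂ = γ n₀ H²/[D_S (C₀ − C_min)].

As a consistency check, Le = α_f/D_S reproduces the value Le = 94 quoted later for α_f = 2 × 10⁻⁷ m²/s and D_S = 2.12 × 10⁻⁹ m²/s.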
According to Hillesdon and Pedley [45], the layer can be treated as shallow as long as the condition in Equation 16 is satisfied; Equation 16 gives the maximum layer depth for which the oxygen concentration at the bottom does not drop below C_min.
The boundary conditions for Equations 1, 2, 6, 8, 10, and 14 are imposed as follows. It is assumed that the temperature and the volumetric fraction of the nanoparticles are constant on the boundaries and that the flux of microorganisms through the boundaries is equal to zero. The lower boundary is always assumed rigid, and the upper boundary can be either rigid or stress-free. The boundary conditions for the case of a rigid upper wall are given in Equations 17 and 18. The fifth equation in (18) is equivalent to the statement that the total flux of microorganisms at the upper surface is equal to zero: the microorganisms swim vertically upward at the top surface, but (because their concentration gradient at the top surface is directed vertically upward) they are simultaneously pushed downward by diffusion; the two fluxes are equal but opposite in direction.
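The sets (17) and (18) themselves are not reproduced in this extraction. A sketch consistent with the verbal description, assuming z is measured downward from the upper surface (z = 0 at the upper wall, z = 1 at the lower wall) and using the scalings adopted earlier, is:

at z = 0: w = 0, ∂w/∂z = 0, T = 0, φ = 1, C = 1, n V·k − (1/Lb) ∂n/∂z = 0;
at z = 1: w = 0, ∂w/∂z = 0, T = 1, φ = 0, ∂C/∂z = 0, ∂n/∂z = 0.

The fifth condition at z = 0 is the zero-flux statement discussed above; at the lower wall the oxygen flux vanishes, so the swimming contribution to the microorganism flux vanishes there as well, and zero total flux reduces to ∂n/∂z = 0.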
If the upper surface is stress-free, the second equation in (18) is replaced with the corresponding stress-free condition.
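In standard form this stress-free condition is ∂²w/∂z² = 0 at the upper surface, replacing the no-slip condition ∂w/∂z = 0 of the rigid-wall set (18).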
Basic state
The solution for the basic state corresponds to a time-independent quiescent situation, and it is sought in the form given by Equation 20. In this case, the solution of Equations 6, 8, 10, and 14 subject to boundary conditions (17) and (18) is given by Equations 21 to 24 (the particular form of the hydrodynamic boundary conditions at the upper surface is not important because the solution in the basic state is quiescent), where A₁ is the smallest positive root of the transcendental Equation 25. The solutions given by Equations 23 and 24 were first reported in [44].
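The explicit solution (Equations 20 to 25) is not reproduced in this extraction. What follows from the model directly is that, in a quiescent state, the thermal and nanoparticle equations force T_b and φ_b to vary linearly across the layer (with the scalings assumed earlier, T_b = z and φ_b = 1 − z for z measured downward from the upper surface), while n_b(z) and C_b(z) take the shallow-layer forms of Hillesdon and Pedley [44,45], expressed through the constant A₁ defined by the transcendental Equation 25.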
The pressure distribution in the basic state, p_b(z), can then be obtained by integrating the form of the momentum equation that follows from Equation 2 in the quiescent state. Equations 21 and 22 can be simplified if characteristic parameter values for a typical nanofluid are considered, based on the data presented in Buongiorno [43] for an alumina/water nanofluid.
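With the sign conventions of the momentum-equation reconstruction given earlier, the quiescent balance to be integrated reads

dp_b/dz = Rm − Ra T_b(z) + Rn φ_b(z) + Rb n_b(z);

this is offered as a consistent sketch rather than the paper's exact expression.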
Linear instability analysis
Perturbations are superimposed on the basic solution according to Equation 29, which is then substituted into Equations 1, 2, 6, 8, 10, and 14; the resulting equations are linearized, and use is made of Equations 27 and 28. This procedure results in Equations 30 to 35 for the perturbation quantities. Equations 30 to 35 are independent of Rm, since this parameter is just a measure of the basic static pressure gradient. In order to eliminate the pressure and the horizontal components of velocity from Equations 30 and 31, Equation 31 is operated on with k·curl curl (see [46]), and use is made of the identity curl curl ≡ grad div − ∇² together with Equation 30. This reduces Equations 30 and 31 to a single scalar equation, Equation 36, which involves only one component of the perturbation velocity, w′; here ∇²_H is the two-dimensional Laplacian operator in the horizontal plane and ∇⁴w′ is the Laplacian of the Laplacian of w′. Equations 17 and 18 then lead to the boundary conditions (37) and (38) for the perturbation quantities for the case when both the lower and upper walls are rigid; if the upper boundary is stress-free, the second equation in (38) is replaced by Equation 39. The method of normal modes is used to solve the linear boundary-value problem composed of differential Equations 32 to 36 and boundary conditions (37) and (38) (or (39)). A normal mode expansion is introduced (Equation 40), in which the function f(x,y) satisfies Equation 41, and Equation 25 for A₁ reduces accordingly. In Equations 42 to 46, s is a dimensionless growth factor; for neutral stability the real part of s is zero, so it is written s = iω, where ω is a dimensionless frequency (a real number).
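A sketch of the missing expansions (Equations 29, 40, and 41), under the conventions stated in the text, is as follows. Perturbations are added to the basic state, (U, p, T, φ, n, C) = (U_b + U′, p_b + p′, T_b + T′, φ_b + φ′, n_b + n′, C_b + C′), and each perturbation is written in normal-mode form, e.g.

w′ = W(z) f(x,y) exp(s t), T′ = Θ(z) f(x,y) exp(s t), and similarly for φ′, n′, and C′,

where the planform function satisfies ∇²_H f + m² f = 0, with m the dimensionless horizontal wavenumber; on the neutral-stability boundary s = iω.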
For the case of rigid-rigid walls, the boundary conditions for the amplitudes follow directly from (37) and (38). If the upper surface is stress-free, the second equation in (49) is replaced by the corresponding stress-free condition.
Rigid-rigid boundaries
For the case of the rigid-rigid boundaries, the utilization of a standard Galerkin procedure (see, for example, [47]), which involves substituting the trial functions given by Equation 51 into Equations 42 to 46, calculating the residuals, and making the residuals orthogonal to the relevant trial functions, results in an eigenvalue equation (Equation 54) relating the three Rayleigh numbers Ra, Rn, and Rb. The functions F₁, F₂, F₃, and F₄ appearing in it are given in the appendix [see Equations A1 to A4]; they depend on Lb, Le, Ln, Pr, N_A, ϖ, ω, and m. It is interesting that Equation 54 is independent of N_B at this (one-term Galerkin) order of approximation.
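Although Equation 54 itself is not reproduced, its structure can be inferred: since it relates the three Rayleigh numbers and the computed non-oscillatory boundary is a straight line in the (Ra_c, Rn) plane (Figure 1a), it is presumably linear in Ra, Rn, and Rb, i.e., of the generic form

F₁ Ra + F₂ Rn + F₃ Rb = F₄,

with F₁ to F₄ the appendix functions of Lb, Le, Ln, Pr, N_A, ϖ, ω, and m. This form is a plausible reading, not a verbatim reproduction.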
In order to evaluate the accuracy of the one-term Galerkin approximation used in obtaining Equation 54, the accuracy of this equation is estimated for the case of non-oscillatory instability (which corresponds to ω = 0) in the situation when the suspension contains no microorganisms (n₀ = 0, which leads to Rb = 0) and no nanoparticles (which leads to Rn = 0). In this limiting case Equation 54 collapses to Equation 55. The right-hand side of Equation 55 takes the minimum value of 1750 at m_c = 3.116; the obtained critical value of Ra is 2.5% greater than the exact value (1707.762) for this problem reported in [48]. The corresponding critical value of the wavenumber is 0.03% smaller than the exact value (3.117) reported in [48].
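The limiting relation (Equation 55) is expected to be the standard one-term Galerkin result for rigid-rigid Rayleigh-Bénard convection (an assumption, but one that reproduces the quoted numbers exactly):

Ra = 28 (m² + 10)(m⁴ + 24 m² + 504)/(27 m²),

whose right-hand side indeed attains its minimum value of 1750 at m_c = 3.116.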
Based on the data presented in [44,45] for the soil bacterium Bacillus subtilis, the following parameter values for these microorganisms are used: D_mo = 1.3 × 10⁻¹⁰ m²/s, D_S = 2.12 × 10⁻⁹ m²/s, Δρ = 100 kg/m³, n₀ = 10¹⁵ cells/m³, θ = 10⁻¹⁸ m³, and H = 2.5 × 10⁻³ m (or 2.5 mm; this is a typical depth of a shallow layer, and this size is also typical for a microdevice). Also, according to Hillesdon et al. [45], for Bacillus subtilis the dimensionless parameters can be estimated as Pe = 15H and σ̂ = 7H²/Le, where the layer depth, H, must be given in mm. Based on [43], the following parameter values for a typical alumina/water nanofluid are utilized: ρ_f0 = 10³ kg/m³, ρ_p = 4 × 10³ kg/m³, (ρc)_p = 3.1 × 10⁶ J/(m³·K), α_f = 2 × 10⁻⁷ m²/s, D_B = 4 × 10⁻¹¹ m²/s, D_T = 6 × 10⁻¹¹ m²/s, and μ = 10⁻³ Pa·s. Additional assumptions are made for the remaining dimensional parameters.

For Figure 1a,b,c, the following values of the dimensionless parameters are utilized: Lb = 1500, Le = 94, Ln = 5000, Pr = 5, N_A = 5, ϖ = 17, and Rb = 0 (which corresponds to the situation with zero concentration of microorganisms). Rn changes in the range between -1.2 and 1.2. In Figure 1a, the boundary for non-oscillatory instability (shown by a solid line) is obtained by setting ω to zero in Equation 54, solving this equation for Ra, and then finding the minimum with respect to m of the right-hand side of the obtained equation. The boundary for oscillatory instability (shown by a dotted line) is obtained by the following procedure. Two coupled equations are produced by taking the real and imaginary parts of Equation 54. One of these equations is used to eliminate ω, and the resulting equation is then solved for Ra; the critical value of Ra is again obtained by calculating the minimum value that the expression for Ra takes with respect to m.

Figure 1a shows that for Rb = 0 the curve representing the instability boundary for non-oscillatory convection (solid line) is a straight line in the (Ra_c, Rn) plane. Rn is defined in Equation 5 in such a way that positive Rn corresponds to a top-heavy nanoparticle distribution. Therefore, an increase of Rn produces a destabilizing effect and reduces the critical value of Ra. A comparison between the instability boundaries for the non-oscillatory (solid line) and oscillatory (dotted line) cases indicates that, in order for the oscillatory instability to occur, Rn generally must be negative, which corresponds to a bottom-heavy (stabilizing) nanoparticle distribution. In this case the destabilizing effect of the temperature gradient (positive Ra corresponds to heating from the bottom) and the destabilizing effect from upswimming of oxytactic microorganisms compete with the stabilizing effect of the nanoparticle distribution.

Figure 1b shows that the critical value of the wavenumber, m_c, is independent of Rn and, for the case displayed in Figure 1a (Rb = 0), is equal to 3.116; it is also almost independent of the mode of instability (non-oscillatory versus oscillatory). Figure 1c shows the square of the oscillation frequency, ω², versus the nanoparticle concentration Rayleigh number, Rn. The value of ω² for the oscillatory instability boundary is obtained by eliminating Ra from the two coupled equations resulting from taking the real and imaginary parts of Equation 54 and solving the resulting equation for ω². The solution is presented in terms of ω² rather than ω because the resulting equation is bi-quadratic in ω. For oscillatory instability to occur, ω² must be positive so that ω is real.
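To make the minimization step of this procedure concrete, the short Python sketch below reproduces the Rb = Rn = 0 accuracy check quoted earlier. The dispersion function used is the one-term Galerkin expression reconstructed above, not the full F₁-F₄ relation from the appendix (which is not available in this text); the numerical minimization over m mirrors the "minimum with respect to m" step described for Figure 1a.

    from scipy.optimize import minimize_scalar

    def ra_one_term_galerkin(m):
        # One-term Galerkin dispersion relation for rigid-rigid Rayleigh-Benard
        # convection: the no-nanoparticle (Rn = 0), no-microorganism (Rb = 0)
        # limit quoted in the text.
        m2 = m * m
        return 28.0 * (m2 + 10.0) * (m2 * m2 + 24.0 * m2 + 504.0) / (27.0 * m2)

    # Minimize Ra(m) over the horizontal wavenumber m to locate the critical point.
    res = minimize_scalar(ra_one_term_galerkin, bounds=(0.5, 10.0), method="bounded")
    print(f"m_c  = {res.x:.3f}")    # ~3.116
    print(f"Ra_c = {res.fun:.1f}")  # ~1750, i.e., 2.5% above the exact 1707.762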
Figure 1c shows that for Rb = 0 ω is real when Rn is negative. Figure 2a,b,c is computed for the same parameter values as Figure 1a,b,c, but now with Rb = 120000. Figure 2a,b,c thus shows the effect of microorganisms. By comparing Figure 2a with 1a, it is evident that the presence of microorganisms produces the destabilizing effect and reduces the critical value of Ra. For example, at (N A + Ln) Rn = -5000 in Figure 1a the value of Ra c corresponding to the non-oscillatory instability boundary is 6750 and in Figure 2a the corresponding value of Ra c is 6437. At (N A + Ln) Rn = 5000 in Figure 1a the value of Ra c corresponding to the non-oscillatory instability boundary is -3250 and in Figure 2a the corresponding value of Ra c is -3563. The destabilizing effect of oxytactic microorganisms is explained as follows. These microorganisms are heavier than water and on average they swim in the upward direction. Therefore, the presence of microorganisms produces a top-heavy density stratification and contributes to destabilizing the suspension.
The comparison of Figure 2b with 1b shows that the presence of microorganisms increases the critical wavenumber (in Figure 1b it was 3.116 and in Figure 2b it is 3.441). Figure 2c brings an interesting insight. Apparently, if the concentration of microorganisms is above a certain value, the oscillatory mode of instability is not possible. Indeed, ω 2 in Figure 2c is negative for the whole range of Rn (-1.2 ≤ Rn ≤ 1.2) used for computing this figure. This means that ω is imaginary and oscillatory instability does not occur for the value of Rb used in computing Figure 2.
Rigid-free boundaries
For the case when the upper boundary is stress-free, the eigenvalue equation (Equation 56) has the same structure as Equation 54, with the functions F₅, F₆, F₇, and F₈ given in the appendix [see Equations A10 to A13].
Again, in order to evaluate the accuracy of the one-term Galerkin approximation in this case, the same limiting situation is considered, for which the eigenvalue equation collapses to Equation 57. The right-hand side of Equation 57 takes the minimum value of 1139 at m_c = 2.670; the obtained value of Ra_c is 3.48% greater than the exact value (1100.65) for this problem reported in [48]. The corresponding critical value of the wavenumber is 0.45% smaller than the exact value (2.682) reported in [48].
For Figures 3a,b,c and 4a,b,c, which show the results for the rigid-free boundaries, the same parameter values as for Figures 1 and 2 are utilized. Figure 3a, which is computed for Rb = 0 (no microorganisms), shows boundaries of non-oscillatory and oscillatory instabilities. This figure is similar to Figure 1a, but since now the case of the rigid-free boundaries is considered, the values of the critical Rayleigh number in Figure 3a are smaller than those in Figure 1a. Again, the comparison between the non-oscillatory and oscillatory instability boundaries indicates that in order for oscillatory instability to occur Rn must be negative; in this case at the instability boundary the effect of the nanoparticle distribution is stabilizing and the effect of the temperature gradient is destabilizing; the presence of these two competing agencies makes the oscillatory instability possible.
The critical wavenumber shown in Figure 3b (m c = 2.670) is smaller than the corresponding critical wavenumber for the rigid-rigid boundaries shown in Figure 1b. Again, it is independent of Rn and almost independent of the mode of instability (non-oscillatory versus oscillatory). Figure 3c, similar to Figure 1c, shows that ω is real when Rn is negative, which means that for negative values of Rn oscillatory instability is indeed possible. Figure 4a,b,c shows the results for rigid-free boundaries computed with Rb = 120000, meaning that the difference with Figure 3a,b,c is the presence of microorganisms. As in the case with rigid-rigid boundaries, the presence of microorganisms produces a destabilizing effect and reduces the critical value of the Rayleigh number (compare Figures 4a and 3a).
Also, the presence of microorganisms increases the critical value of the wavenumber (compare Figures 4b and 3b).
Conclusions
The possibility of an oscillatory mode of instability in a nanofluid suspension that contains oxytactic microorganisms is investigated. The destabilizing effect of the microorganisms becomes stronger when their concentration in the suspension is larger. The concentration of microorganisms is measured by the bioconvection Rayleigh number, Rb, which by definition is always nonnegative (the zero value of Rb corresponds to a suspension with no microorganisms). The increase of Rb thus destabilizes the suspension. It is also shown that the presence of microorganisms increases the critical wavenumber.
The effect of the temperature distribution can be either stabilizing (heating from the top, negative thermal Rayleigh number Ra) or destabilizing (heating from the bottom, positive Ra). The effect of nanoparticles can also be stabilizing (bottom-heavy nanoparticle distribution, negative nanoparticle concentration Rayleigh number Rn) or destabilizing (top-heavy nanoparticle distribution, positive Rn).
The results obtained in this article indicate that in order for the oscillatory instability to occur, Rn generally must be negative, which corresponds to a bottom-heavy (stabilizing) nanoparticle distribution. In this case the destabilizing effect of the temperature gradient (positive Ra) and destabilizing effect from upswimming of oxytactic microorganisms compete with the stabilizing effect of the nanoparticle distribution.
In order for the oscillatory mode of instability to occur, the dimensionless oscillation frequency, ω, must be real. Since increasing Rb pushes ω 2 to negative values, oscillatory instability is possible only if the concentration of microorganisms is below a certain value.
The results for the rigid-rigid and rigid-free boundaries are similar, but the critical Rayleigh number for the rigid-free boundaries is smaller. The critical wavenumber for the rigid-free boundaries can be either smaller or larger, depending on the concentration of microorganisms. For Rb = 0 the critical wavenumber is smaller for the rigid-free boundaries, but for Rb = 120000 it is larger than for the rigid-rigid boundaries.
"Physics",
"Engineering"
] |
The Effect of Ni2+ Ions Substitution on Structural, Morphological, and Optical Properties in CoCr2O4 Matrix as Pigments in Ceramic Glazes
The structural, morphological, and optical properties of Ni2+ ion substitution in a CoCr2O4 matrix as ceramic pigments were investigated. The thermal decomposition of the dried gel was performed with the aim of understanding the mass changes during annealing. The X-ray diffraction (XRD) studies reveal a spinel-type Face-Centered Cubic structure with a secondary Cr2O3 phase when x ≤ 0.75, and a Body-Centered Tetragonal structure when x = 1. Fourier Transform Infrared Spectroscopy (FT-IR) indicated two strong absorption bands corresponding to metal-oxygen stretching from the tetrahedral and octahedral sites, characteristic of the spinel structure. Ultraviolet-Visible (UV-Vis) spectra exhibited the electronic transitions of the Cr2+, Cr3+, and Ni2+ ions. From the UV-Vis data, the CIE color coordinates (x, y) of the pigments were evaluated. The morphology was examined by Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM), showing the agglomeration behavior of the particles. The stability, coloring properties, and potential ceramic applications of the studied pigments were tested by their incorporation in matte and glossy tile glazes, followed by the application of the obtained glazes on ceramic tiles. This study highlights the change in pigment color (from turquoise to a yellowish green) with Ni2+ ion substitution in the CoCr2O4 spinel matrix.
Introduction
The color of ceramic tiles, together with the design and the technical properties, is one of the characteristics most appreciated by customers. Thereby, there is a real need for diversifying the ceramic color palette. In addition, the use of newer technologies for obtaining colored models on tiles more easily, faster, and with higher quality keeps research in the field of ceramic pigments open. Nowadays, the demand is focused on finding new materials suitable for the challenge created by digital ink-jet decoration printing for ceramic tiles. This application allows the use of colloidal suspensions of pigments to improve the decorative features of ceramic parts and other ceramic tiles [1].
The ceramic pigments are inorganic materials which have high thermal and chemical stability. These properties allow them to be subjected to the processing conditions without losing their coloring characteristics.
Spinel compounds have a huge contribution in the field of ceramic pigments to obtain different colors like red, pink, brown, gray, or blue.
Spinels with the general formula AB2O4 represent a large family of inorganic compounds which are now used in different industrial branches as ceramic pigments [2][3][4][5].

A pH regulator was also used in the synthesis. The sucrose and pectin used to support the condensation reaction were of commercial food grade.
Synthesis of Pigments
Concentrated precursor solutions were prepared using the metal salts (Co(NO3)2·6H2O, Ni(CH3COO)2·4H2O and (NH4)2Cr2O7) dissolved in distilled water. Separately, another solution containing sucrose, with a sucrose-to-final-oxide ratio (wt.:wt.) of 2:1, was prepared. The sol-gel process starts when the precursor solutions are mixed with the sucrose solution and pectin to form a gel. The addition of sucrose and pectin to the precursor solution containing the metal cations forms a polymer matrix in which the metal cations are distributed throughout the polymeric network structure [20]. This mechanism, and the role played by sucrose and pectin in the formation of the oxide structures, is discussed in more detail in reference [20].
The hydrolysis process occurs during vigorous magnetic stirring (1000 RPM) of the metal solutions under strict temperature control (at 40-45 °C) and with a pH correction to around 1-1.5. After stirring, the obtained sol is left to age for 24 h at 80 °C to ensure the formation of the gel lattice, with the elimination of the water present in the pores and the final formation of a porous structure. The thermal treatment of the dried gels was performed in an electric furnace, in porcelain crucibles. The furnace temperature was increased at a rate of 300 °C/h, with an isothermal plateau of 30 min at 1000 °C.
Analysis Methods
The mass loss and the phase transformations occurring during the heating of the dried gel were investigated by thermal analysis (TG-DTA) employing an SDT Q600 thermal analyzer (TA Instruments, New Castle, DE, USA).
The thermal analysis was performed on a sample of about 10 mg placed in an alumina crucible and non-isothermally heated from 30 °C to 1000 °C at a heating rate of 10 °C/min in a dynamic air flow.
The structural characterization was carried out at room temperature by powder X-ray diffraction using a Bruker D8 Advance AXS diffractometer (Karlsruhe, Germany) with Cu Kα radiation in the 20-80° 2θ region. Crystallite sizes, cell arrangements, and phase fractions were calculated by Rietveld refinement analysis using the FullProf Suite software (July 2017 release).
The FT-IR absorption spectra were recorded with a JASCO FTIR 6200 spectrometer in the 400-1500 cm −1 spectral range, with a standard resolution of ±2 cm −1 .
Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM) were performed employing a combined scanning (SE) and transmission (TE) Hitachi HD-2700 electron microscope (Hitachi, Chiyoda, Tokyo, Japan) operated at a maximum acceleration voltage of 200 kV. Energy-dispersive X-ray spectrometry (EDX) was used to obtain the images of the nickel distribution in the glossy ceramic glaze.
The absorption spectra of the powder samples were obtained using a Perkin-Elmer Lambda 45 UV/Vis spectrometer (Waltham, MA, USA) with an integrating sphere, using the pellet technique. The powder samples, mixed with BaSO4, were uniaxially pressed in a pellet die with a load of 10 tons/cm² to form transparent disks 13 mm in diameter and 2 mm thick. The spectra were recorded in the 200-850 nm wavelength range, with a wavelength accuracy of ±2 nm.
The stability of the pigments was checked by incorporating them in both matte and glossy tile glazes, using a 2% pigment addition. The thermal treatment for glaze melting was performed at 1200 °C, with 6 h at the maximum temperature. The glazes were characterized using a spectro-guide series color spectrophotometer (BYK Gardner, Los Angeles, CA, USA).
Thermal Analysis
The thermal decomposition of the dried precursor gel was investigated to understand the details of the decomposition process but, most importantly, to determine the temperature at which nucleation and crystallization take place. This is imperative from a technological point of view and for the potential application of the studied pigments in industry. The thermogravimetric (TG) and differential thermal analysis (DTA) thermograms (20-1000 °C) of the CoCr2O4 dried gel are presented in Figure 1.
The decomposition of the dried gel follows two steps:
• the removal of the residual water (the absorbed and the coordinated water), with a mass loss of about 5.09% (30-61 °C);
• the weight loss of the organic fragment with the formation of volatile compounds, with a total mass loss of approximately 9.13% (61-1000 °C).
A significant mass loss takes place up to 400 °C, after which no significant losses are recorded. Overall, the total mass loss at 1000 °C was 15.02% and is assigned to the drying process of the gel, in which much of the organic precursor has been removed. Prior to 900 °C, a small mass loss of 1.03% is attributed to volatile compounds that formed and remained inside the material pores. The DTA curve presents a large exothermic peak which unfolds in the 20-800 °C range, confirming the two decomposition steps listed above. The peak corresponding to the elimination of the organic fragments can be described by the following temperatures: T_onset = 318.35 °C, T_peak = 331.02 °C and T_end = 352.76 °C.
The dried gels were subjected to thermal treatment at 1000 °C for 30 min to ensure the compound crystallization.
Structural Characterization of Spinels
The XRD patterns present only intense and sharp peaks, characteristic of crystalline phases. Figure 2 presents the XRD patterns of the Ni2+-substituted CoCr2O4 spinel matrix after thermal treatment in air at 1000 °C for 30 min. Both NiCr2O4 and CoCr2O4 oxides crystallize into normal spinel structures of the AB2O4 form, with Co2+ and Ni2+ ions occupying the tetrahedral A sites and Cr3+ ions occupying the octahedral B sites.
The Co(1−x)NixCr2O4 spinel (when x ≠ 1) crystallizes in the Face-Centered Cubic structure and is indexable to the space group Fd-3m (no. 227). This is consistent with the standard values of the Face-Centered Cubic phase (PDF file 00-022-1084). On the other hand, NiCr2O4 crystallizes in the Body-Centered Tetragonal structure, space group I41/amd (no. 141). Thus, a transition from the cubic spinel (when x < 1) to a Body-Centered Tetragonal structure (when x = 1) is observed.
For samples with x < 1, the XRD data analysis indicated the formation of two crystalline phases: a well-crystallized Co(1−x)NixCr2O4 spinel with a structure similar to CoCr2O4 (PDF file 00-022-1084) as the major phase, and Cr2O3 (PDF file 00-038-1479) as a secondary phase. Crystallinity-dependent properties of crystals are attributed to their crystallite size: larger crystallites develop sharper peaks in the XRD pattern for each crystal plane, and the width of a peak is related to its crystallite size [21]. The formation of the Cr2O3 impurity phase is common during the synthesis of NiCr2O4 and is often reported in the literature [22,23].
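Although the text does not reproduce the peak-width relation, the dependence invoked here is commonly quantified by the Scherrer equation (given for illustration; the Rietveld analysis used by the authors is a more general treatment):

D = K λ/(β cos θ),

where D is the crystallite size, K ≈ 0.9 is a shape factor, λ is the X-ray wavelength (Cu Kα, about 1.5406 Å), β is the peak full width at half maximum in radians, and θ is the Bragg angle.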
The Rietveld refinement fittings were carried out to investigate the changes in the crystal structure by using FullProf software. The experimental recorded data points are represented with black circles, the structural model fit is represented with a red solid line, the structural model fit is represented by the blue solid line and the vertical green bars represent the Bragg positions. Up until the best possible convergence, the refinement was continued. The good fit is evidenced in Figure 3 where the small variance in the difference curve can be observed. The Co (1−x) Ni x Cr 2 O 4 spinel (when x = 1) crystallizes in Face-Centered Cubic structure and is indexable to the space group Fd-3m (no. 227). This is consistent with the standard values of Face-Centered Cubic phase (00-022-1084 PDF files). On the other hand, NiCr 2 O 4 crystalizes in the Body-Centered Tetragonal structure, the space group I41/amd (no. 141). So, a transition from the cubic spinel (when x < 1) to a Body Centered Tetragonal structure (when x = 1) is observed.
For samples with x < 1 the XRD data analysis indicated the formation of two crystalline phases: a well crystallized phase Co (1−x) Ni x Cr 2 O 4 spinel with similar structure to CoCr 2 O 4 (00-022-1084 PDF files) as the major phase, and Cr 2 O 3 (00-038-1479 PDF files) as a secondary phase. Crystallinity dependent properties of crystals are attributed to their crystallite size. Larger crystallites develop sharper peaks on the XRD pattern for each crystal plane. The width of a peak is related to its crystallite size [21]. The formation of Cr 2 O 3 impurity phase is common during the synthesis process of NiCr 2 O 4 and is often reported in the literature [22,23].
Rietveld refinement fittings were carried out using the FullProf software to investigate the changes in the crystal structure. The experimentally recorded data points are represented by black circles, the structural model fit by a red solid line, the difference curve by a blue solid line, and the vertical green bars represent the Bragg positions. The refinement was continued until the best possible convergence. The good fit is evidenced in Figure 3, where the small variance in the difference curve can be observed.
The average crystallite sizes, the lattice parameters, and the average strain and standard deviation calculated with the FullProf software are listed in Tables 1 and 2. Thereby, the average crystallite size increases with the increase in Ni2+ concentration in the Co(1−x)NixCr2O4 spinel, starting from 39.9 nm when x = 0 and reaching 99.42 nm when x = 0.75. The average crystallite size of NiCr2O4 (when x = 1) was 58.98 nm.
The evolution of the average crystallite size, cell parameter, density, and unit cell volume as a function of nickel concentration is represented in Figure 4. Thereby, the lattice constant of the Co(1−x)NixCr2O4 spinel structure when x ≤ 0.75 was measured to be 8.33 Å, in concordance with the literature data [24]. This large value of the lattice constant was assigned to the disordering of cations in the spinel structure of CoCr2O4 and to the exchange of tetrahedral A-site Co2+ ions with octahedral B-site Cr3+ ions [2].
When the Ni2+ content is increased, the lattice parameter and the unit cell volume decrease (Figure 4a,b). This decrease is assigned to the difference in ionic radius between Ni2+ and Co2+, with nickel having the smaller radius. Similarly, the density increases with the increase of nickel concentration (Figure 4d).

Considering the targeted application as pigments, the crystallite size plays a determining role in the coloration capacity, since the specific surface area is size-dependent.
FT-IR Spectroscopy
The FT-IR spectra of the Co(1−x)NixCr2O4 (x = 0, 0.25, 0.5, 0.75 and 1) samples are presented in Figure 5. Two strong absorption bands centered at 516 and 620 cm⁻¹, with a shoulder at 666 cm⁻¹, are observed. These absorption bands are the characteristic spinel peaks and depend on the vibration of the cations at the B site [25][26][27].
As can be observed in the FTIR spectra, the peak centered at 525 cm⁻¹ for the sample with x = 0 shifts to a lower frequency, 504 cm⁻¹, as the nickel content increases to x = 1 (in NiCr2O4). Crystal field stabilization energy theory indicates that Cr3+ and Co2+ are incorporated into the octahedral and tetrahedral interstitial sites; Ni2+, which occupies the octahedral interstitial sites and possesses a smaller ligand field and d-orbital splitting energy owing to its smaller electric charge, causes the peak to shift to lower frequencies [25,26,28]. The second peak, at 620 cm⁻¹, also shifts towards lower frequencies with the increase of nickel content and has the same origin as the peak at 516 cm⁻¹ [25]. Minor changes in the observed band shape can be assigned to the particle size, a similar behavior being observed in CoCr2O4 samples presented in the literature [29].
Morphological Characteristics
SEM and TEM micrographs of the Co(1−x)NixCr2O4 samples, with x = 0, 0.25, 0.5, 0.75 and 1, are presented in Figure 6. All samples show a tendency of their small particles to agglomerate, generating larger, irregular aggregates with various rhombohedral-like shapes. Additionally, no significant differences in the spinel microstructure were observed with the increase of Ni2+ ion content, suggesting a similar morphology for all compositions. The crystallite sizes estimated from the TEM analysis confirm the values calculated from XRD.
UV-Vis Spectroscopy
The Ni2+ substitution of Co2+ in the CoCr2O4 matrix was investigated by UV-Vis spectroscopy. The optical absorption spectra of Co(1−x)NixCr2O4 (x = 0, 0.25, 0.50, 0.75 and 1), recorded in the 250-850 nm wavelength range, are presented in Figure 7. The recorded spectra of the synthesized nanoparticles are very similar to the spectra of CoCr2O4 [29] and also of NiCr2O4 [29] reported in the literature. The band around 259 nm can be attributed to charge-transfer transitions between O2− and Cr3+ [17,29]. The incorporation of nickel ions in the CoCr2O4 matrix leads to the broadening of the band centered around 350 nm, a band assigned to a charge transfer of the Ni2+ cation [25,30], and to a decrease in intensity of the bands in the 574-700 nm wavelength range. Additionally, the absorption bands of the Co(1−x)NixCr2O4 samples centered at 574 nm, 612 nm, and 660 nm shift by 10 nm to higher wavelengths, and the band centered at 760 nm shifts to lower wavelengths, with the increase of Ni2+ content.
The shoulder present in all the spectra at around 460 nm is attributed to the intrinsic d-d transition of Cr 3+ ions [31][32][33]. A decrease in the intensity of the absorption bands at wavelengths higher than 500 nm was observed with the increase of Ni 2+ ions in the matrix.
The absorption bands at about 574 nm and 612 nm are frequently observed in cobalt chromite oxides and can be attributed to the spin-allowed electronic transition 4A2(F)→4T1(P) of Co2+ ions at the A site. The absorption band at 612 nm is also assigned to the 4A2g→4T2g transition of Cr3+ ions at the B site [17,25]. The bands at 660 nm and 750 nm correspond to the 4A2→4T2 and 3A2→3T1 transitions and are due to d-d transitions of Ni2+ ions in the tetrahedrally coordinated O2− environment [20,32]. The bands at about 750 nm and 820 nm become more visible with the increase of Ni2+ ions in the matrix and confirm the incorporation of Ni2+ in tetrahedral coordination [34].
CIE Diagram of the Obtained Pigments
The UV-Vis data were used to determine the CIE color coordinates, (x, y), of the pigment samples. The obtained color parameters are listed in Table 3 and represented in the CIE diagram illustrated in Figure 8. As can be observed, the color coordinates vary slightly, from (0.3865, 0.3295) when x = 0.25 to (0.3118, 0.2762) when x = 1, ranging in between for x = 0.5 and 0.75. Increasing the nickel concentration in the matrix leads to a shift from a pink-turquoise to a greenish-blue color. This observation correlates with the literature data [10] and with the UV-Vis absorption spectra described previously.
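The spectra-to-coordinates conversion is not detailed in the text; the standard CIE 1931 route, assuming a measured spectrum R(λ), an illuminant S(λ), and the colour-matching functions x̄(λ), ȳ(λ), z̄(λ), is

X = ∫ S(λ) R(λ) x̄(λ) dλ, Y = ∫ S(λ) R(λ) ȳ(λ) dλ, Z = ∫ S(λ) R(λ) z̄(λ) dλ,

with the chromaticity coordinates x = X/(X + Y + Z) and y = Y/(X + Y + Z).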
The pigments were embedded in both matte and glossy tile glazes to test their applications in the ceramic tile industry. Thereby, homogeneous glazes were prepared by incorporating the spinel nanopowders into ceramic glazes, which were further applied on ceramic substrates and finally subjected to thermal treatment. Figure 9 presents a comprehensive view of the pigments before and after their embedding. As can be observed, the pigment powders exhibit bluish tones that tend to become greener with the progression of Ni2+ substitution, up to a yellowish green with the full Co2+ replacement.
The color of the pigment powders and of those embedded in glazes, in terms of color parameters (L* = lightness, a* = red-green axis value, b* = yellow-blue axis value, G* = gloss), is presented in Table 4. A major difference that can be observed is that of the L* parameter, which ranges between 31.09 and 33.49 for the matte glaze and from 40.69 to 45.72 for the glossy one. The gloss parameter (G*) is lower for matte glazes (50.5-58.1) and higher for glossy ones (83.8-86.8), due to the nature of the glaze itself. This can be explained by the fact that the latter reflects light due to its smoother surface, whereas the former tends to scatter it. Therefore, pigments embedded in glossy glazes will naturally appear 'brighter'.
The nickel substitution in the spinel was evidenced after embedding in glossy ceramic tile by using the Energy-Dispersive X-ray Spectrometry (EDX) from SEM. Figure 10 presents the EDS element layered image including Ni Kα1 of the ceramic glossy tile crosssection as a function of nickel substitution in the spinel. A uniform distribution of the nickel in the cross-section of the glaze is observed for all samples and the partial and total substitution of nickel is evidenced in Figure 10. The a* and b* coordinates of the glazes increase with increasing nickel substitution in pigment which impacts the color by his chromophore capacity and also by the impact on particle size. Generally, this ranging can also be correlated with different synthesis methods, with different factors such as synthesis conditions (temperature, stoichiometry, and pH) or particle size. For example, the growth in b* value coordinate was correlated with the increase in particle size in CoAl 2 O 4 for blue pigment [1]. In glazes, b* values are highly reduced to a range of −6.7 and −0.4, also with greener hues [1,35].
An increase in a* (red-green) and b* (yellow-blue) values can also be observed with an increase in Ni substitution. Higher a* values tend to lead to yellowish-brown hues and higher b* values to reddish-brown ones. This can be better observed by the color progression in Figure 9. Once embedded in the glazes, the color of the pigment becomes darker and more intense due to thermal treatment.
The experimental results indicate that the synthesized Co (1−x) Ni x Cr 2 O 4 spinel (0.25 ≤ x ≤ 1) oxide nanopowders can be successfully used as ceramic pigments due to their coloration capacity and thermal resistance. The thermal and chemical stability of the Co (1−x) Ni x Cr 2 O 4 spinel pigment in the glazes is high enough to obtain a uniform color distribution of the product and is not affected by melting in the firing process.
EDX Mapping
The nickel substitution in the spinel was evidenced after embedding in the glossy ceramic tile glaze by using energy-dispersive X-ray spectrometry (EDX) in the SEM. Figure 10 presents the EDS element-layered image, including Ni Kα1, of the glossy ceramic tile cross-section as a function of nickel substitution in the spinel. A uniform distribution of nickel across the cross-section of the glaze is observed for all samples, and the partial and total substitution of nickel is evidenced in Figure 10.
Conclusions
The paper reports the structural, morphological, and optical properties of new spinel-structure ceramic pigments obtained by a sol-gel route. The structural characterization is consistent with the substitution of the chromophore Co2+ ions with Ni2+ ions in the CoCr2O4 matrix. The XRD analysis revealed the formation of a Co(1−x)NixCr2O4 spinel-type Face-Centered Cubic structure as the principal phase and Cr2O3 as a secondary one when x ≤ 0.75. A decrease in the unit cell parameter and the unit cell volume was achieved with an increase of x. When x = 1, NiCr2O4 crystallizes in the Body-Centered Tetragonal spinel as a single phase, in contrast to lower x values, where a secondary phase appears. The increase of nickel substitution in the matrix leads to a pronounced increase in the size of the crystallites, from 39.9 nm (for x = 0) to 99.42 nm (for x = 0.75). FT-IR spectra confirmed the spinel structure formation and the elimination of all the organic fragments. The UV-Vis absorption spectra presented the bands corresponding to nickel ions located at the A site and chromium ions located at the B site. By adjusting the nickel content in the CoCr2O4 matrix, the color of the pigment can be easily controlled, as can also be seen in the CIE chromaticity diagram. SEM and TEM microscopy evidenced the powder morphology and the tendency of the nanoparticles to agglomerate. The CIELab coordinates of the pigments embedded in glossy and matte tile glazes reveal the color range of the two glazes. Additionally, the elemental EDX distribution of Ni Kα1 confirms the homogeneous and uniform distribution of the pigment in the glossy glaze after firing. The obtained pigments can be successfully applied in glaze tiles and ceramics.
"Materials Science"
] |
Production of Astaxanthin Using CBFD1/HFBD1 from Adonis aestivalis and the Isopentenol Utilization Pathway in Escherichia coli
Astaxanthin is a powerful antioxidant and is used extensively as an animal feed additive and nutraceutical product. Here, we report the use of the β-carotene hydroxylase (CBFD1) and the β-carotene ketolase (HBFD1) from Adonis aestivalis, a flowering plant, to produce astaxanthin in E. coli equipped with the P. agglomerans β-carotene pathway and an over-expressed 4-methylerythritol-phosphate (MEP) pathway or the isopentenol utilization pathway (IUP). Introduction of the over-expressed MEP pathway and the IUP resulted in a 3.2-fold higher carotenoid content in LB media at 36 h post-induction compared to the strain containing only the endogenous MEP. However, in M9 minimal media, the IUP pathway dramatically outperformed the over-expressed MEP pathway with an 11-fold increase in total carotenoids produced. The final construct split the large operon into two smaller operons, both with a T7 promoter. This resulted in slightly lower productivity (70.0 ± 8.1 µg/g·h vs. 53.5 ± 3.8 µg/g·h) compared to the original constructs but resulted in the highest proportion of astaxanthin in the extracted carotenoids (73.5 ± 0.2%).
Introduction
Astaxanthin is a red carotenoid and a highly valuable antioxidant used in the pharmaceutical, cosmetics, nutraceutical, and aquaculture industries [1,2]. It is also used as a nutraceutical for preventing diseases caused by oxidative stress, such as cataracts, various cancers, Parkinson's disease and Alzheimer's disease [3]. It has significant applications in fish farming, where astaxanthin is included in the feed of salmon, trout, and shrimp to brighten the colour of their meat [1]. Astaxanthin is naturally synthesized by several species of algae and fungi [4][5][6]. The majority of commercial astaxanthin production is by chemical synthesis. Unfortunately, the resulting product is a mix of stereoisomers [5], and there is substantial consumer desire for biologically produced astaxanthin containing the single isomer produced by biosynthesis.
Most industrial biological production of astaxanthin uses the microalgal species Haematococcus pluvialis in outdoor photobioreactors, which can accumulate up to 50 mg/g of astaxanthin, the highest reported specific yield for this compound [7,8]. However, the life cycle of this algal species is slow and complex, and astaxanthin production must be induced by a stressor, typically high-intensity light, which is difficult to scale up [9,10]. The production of astaxanthin in heterologous hosts has also been studied extensively for the last 20 years, with the industrial workhorses Escherichia coli, Saccharomyces cerevisiae and Yarrowia lipolytica being the most popular species for heterologous carotenoid production [11]. Astaxanthin produced in E. coli is readily extracted due to its simple cell walls, and, as a bacterium, its cultivation is straightforward [12], making E. coli an attractive host for astaxanthin production. While specific yields are typically 10-fold lower than those achieved with H. pluvialis, E. coli can be readily cultivated to much higher cell densities in a much shorter period than H. pluvialis.
E. coli possesses the biosynthetic pathway needed to produce terpenoids up to sesquiterpenoids, from farnesyl pyrophosphate (FPP), using its native methylerythritol-4-phosphate (MEP) pathway (Figure 1) [13]. To produce the first coloured carotenoid, lycopene, the genes crtE, crtI, and crtB, typically sourced from Pantoea agglomerans or P. ananatis, must be expressed [11]. In order to achieve higher carotenoid yields in E. coli, many studies co-express the heterologous mevalonate (MVA) pathway from eukaryotes [14], and recently, the artificial isoprenoid biosynthesis pathway called the isopentenol utilization pathway (IUP) has been used to achieve very high lycopene yields [15].
Carotenoid production is highly conserved amongst diverse species until lycopene. Enzymes used to produce β-carotene, the precursor to xanthophylls, and the remaining steps to form astaxanthin can vary significantly between different phyla (Figure 1). However, the majority of studies to date looking to produce astaxanthin in E. coli have expressed the bacterial crtY from Pantoea sp. or lycB, a plant lycopene β-cyclase, to produce β-carotene [11]. In order to produce astaxanthin, β-carotene then needs to be oxidized by both ketolases and hydroxylases. Natural bacterial producers use CrtZ and CrtW, microalgae use CrtO/BKT and CrtR/Chyb, some fungi use a bifunctional CrtS (paired with the cytochrome P450 reductase CrtR), and HBFD and CBFD are used in flowering plants [16]. The biosynthesis routes of each pair of enzymes produce different intermediate species. While a multitude of bacterial ketolases and hydroxylases have been used in the metabolic engineering of astaxanthin in E. coli, there are no studies to date exploring the use of the carotenoid-β-ring 4-dehydrogenase (CBFD1) and carotenoid-4-hydroxy-β-ring 4-dehydrogenase (HBFD1), sourced from the flowering plant Adonis aestivalis, together with an over-expressed isoprenoid pathway [16]. Therefore, the major goal of this work was to evaluate the productivity of E. coli strains with over-expressed MEP and IUP pathways when using the CBFD1/HBFD1 pathway for astaxanthin production.
In this work, we combine the astaxanthin biosynthesis pathway of A. aestivalis and the carotenoid pathway of P. agglomerans with either an upregulated endogenous MEP pathway or the artificial IUP biosynthesis pathway for increased IPP/DMAPP production in E. coli to produce astaxanthin. The relative proportions of astaxanthin and other carotenoid intermediates were determined by HPLC and were highly dependent on the construction of the plasmids used. Total productivity was highly dependent on the cultivation media for the different upstream pathways used for the overproduction of the IPP and DMAPP precursors.
Strains, Plasmids and Genes
E. coli K12 MG1655 (DE3) was used as the host for all the astaxanthin expression studies in this work. MG1655(DE3)-trcMEP was a gift from the Stephanopoulos lab (MIT, MA, USA) and has four MEP genes under the control of a lac-inducible trc promoter inserted into the chromosome near the arabinose operon [17]. NEB-5α was used for routine cloning purposes. The genotypes of these strains and plasmids are given in Table 1, and the origins of the genes and their accession numbers are listed in Table A1. The genes from the astaxanthin production pathway were amplified according to the protocol given by the NEB Phusion PCR kit and extracted from a 1% agarose gel. The genes for ggpps, crtB, crtI and idi were sourced from p5T7-lycipi-ggpps [15], which was used as a backbone for the synthesis of p5T7-Astaipi. The crtY gene was sourced from pAC-BETAipi, and cbfd and hbfd were sourced from pCBFD1, both of which were purchased from Addgene (Watertown, MA, USA) (plasmids #53277 and #53364). To over-express ispA, the gene was added to p5T7-lycipi-ggpps from p5T7-Ispa-ads to create p5T7-lycipi-ispA. In two steps, a T7 promoter, terminator, and lac operator (lacIQ) were added to pAC-BETAipi to make pACT7-CBFD1, and pAC-ASTA was then created from this plasmid to house the rest of the genes in the pathway (cbfd1, crtY, hbfd1) under a single T7 promoter. A summary of each construct is shown in Figure 2. All genes expressed in operons have their own ribosome binding site (RBS), except for those in pAC-BETAipi and pCBFD1, which were obtained from Addgene and used as is. All plasmid sequences are available on request.
A list of primers used in this work can be found in Table A2. The fragments were ligated using NEB Hi-Fi assembly master mix and transformed into chemically competent NEB-5α cells using heat shock. Colony PCR was performed using Taq DNA polymerase (New England Biolabs, MA, USA) and standard buffer to identify positive transformants, and the plasmids were isolated and sequenced to confirm the correct assemblies. The plasmids were electroporated into electrocompetent cells in cuvettes with a 1 mm gap (1.8 kV, 25 µF capacitance) and grown on LB plates with the appropriate antibiotics to make the strains listed in Table 1. Plasmids that form carotenoid intermediates can be transformed together to complete the pathway.
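For readers replicating similar constructs, the core of overlap-based (Gibson-style) assembly primer design is mechanical enough to sketch in code. The snippet below is our illustrative sketch, not the authors' protocol: the sequences, the 20-nt homology and annealing lengths, and all function names are placeholders.

```python
# Minimal sketch of Hi-Fi/Gibson-style overlap primer design (illustrative only).

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def overlap_primers(upstream_end: str, insert: str, overlap: int = 20, anneal: int = 20):
    """Primers that amplify `insert` with a 5' tail homologous to the end of
    the upstream fragment, as required for overlap-based assembly."""
    fwd = upstream_end[-overlap:] + insert[:anneal]   # tail + annealing region
    rev = revcomp(insert[-anneal:])                   # annealing region only, for brevity
    return fwd, rev

vector_end = "ATGCATGCATGCATGCATGCATGC"        # placeholder sequence
gene_start = "ATGAGCAAAGGTGAAGAACTGTTCACCGGT"  # placeholder sequence
fwd, rev = overlap_primers(vector_end, gene_start)
print(fwd)
print(rev)
```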
Cultivation Conditions
All media were prepared according to the descriptions below and autoclaved or filter-sterilized prior to use. Antibiotic and inducer stocks were made at 1000× concentration, filtered, and stored at −20 °C. Final concentrations of antibiotics were Kn (50 µg/mL), Ap (50 µg/mL), and Sp (50 µg/mL). Strains were cultivated in either LB media (10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl) or M9 media containing 3.2 g/L glucose, 5 g/L KH2PO4, 1 g/L NH4Cl, 0.5 g/L NaCl, 6.78 g/L Na2HPO4, 100 µM CaCl2, 2 mM MgSO4, and 10 mL/L trace elements based on the formulation provided by Wolfe [19]. Strains were stored at −80 °C in glycerol stocks and revived on LB agar plates (1.5% agar) grown overnight at 37 °C. A single colony was inoculated into LB or M9 media and grown overnight to prepare an inoculum. For carotenoid production, strains were inoculated with 1% (v/v) of overnight culture and cultivated in triplicate in 50 mL of M9 media at 30 °C with shaking at 200 rpm. At an OD600 of 0.5, carotenoid production was induced using a final concentration of 25 mM isoprenol (IUP strains), 1 g/L arabinose (PBAD strains), and 0.1 mM IPTG (PT7 strains) unless otherwise indicated. A list of the strains used in this study can be found in Table 1.
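As a small convenience, the per-litre M9 recipe above can be encoded and scaled to any batch volume; the helper below is hypothetical and not part of the published methods.

```python
# Rough helper (ours, not the paper's): scale the M9 recipe to a batch volume.

M9_PER_LITRE = {        # grams per litre of the solids listed in the text
    "glucose": 3.2,
    "KH2PO4": 5.0,
    "NH4Cl": 1.0,
    "NaCl": 0.5,
    "Na2HPO4": 6.78,
}

def m9_batch(volume_l: float) -> dict:
    """Grams of each solid for `volume_l` litres of M9 base; CaCl2, MgSO4,
    and trace elements are added from sterile stocks after autoclaving."""
    return {name: round(g * volume_l, 3) for name, g in M9_PER_LITRE.items()}

print(m9_batch(0.05))  # e.g., a single 50 mL culture
```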
Carotenoid Extraction and UV/Vis Spectroscopy
For carotenoid quantification, two methods were used: total carotenoid determination using spectrophotometry, and liquid chromatography combined with a diode array detector. To determine the carotenoid content, two 1 mL samples were taken from each flask at the indicated time after induction. Samples were stored in amber microtubes to prevent photodegradation. The cell pellets were collected by centrifugation at 12,000× g for 1 min. One pellet was lyophilized and weighed to obtain the cell dry weight. The other was extracted with 1 mL of 1:1 (v:v) ethanol-acetone solution. The samples were vortexed to mix and incubated in the dark for 1 h at room temperature. The samples were then centrifuged again at 12,000× g for 1 min, 200 µL of the liquid phase was transferred to a 96-well plate, and absorbance was measured at 475 nm using a BioTek Synergy 4 (Agilent, CA, USA) plate reader. Astaxanthin was purchased from Santa Cruz Biotechnology (Dallas, TX, USA), used to create a standard curve, and used as a proxy for total carotenoids. Total carotenoids were calculated using the following equation:

Total carotenoids (µg/g) = [(Abs475 − blank) / 0.0799 mL/µg] ÷ dry cell weight (g/mL)
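The same calculation is easy to express in code; the sketch below assumes the standard-curve slope above (0.0799 mL/µg) and a 1 mL extract, with illustrative input numbers.

```python
# Total-carotenoid calculation from the equation above (illustrative values).

def total_carotenoids(abs475: float, blank: float, dcw_g_per_ml: float,
                      slope_ml_per_ug: float = 0.0799) -> float:
    """Return µg carotenoids per g dry cells for a 1 mL extract."""
    conc_ug_per_ml = (abs475 - blank) / slope_ml_per_ug  # µg/mL in the extract
    return conc_ug_per_ml / dcw_g_per_ml                 # normalise to biomass

print(total_carotenoids(abs475=0.45, blank=0.05, dcw_g_per_ml=0.002))  # ~2500 µg/g
```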
Carotenoid Characterization
Carotenoids were extracted as described above and analyzed using high-performance liquid chromatography (1260 Infinity II, Agilent, CA, USA) equipped with a C30 column (YMC Carotenoid column, 250 mm, 5 µm pore size). Mobile phase A consisted of 15:81:4 methyl tert-butyl ether (MTBE):methanol:water by volume, and mobile phase B consisted of 81:15:4 MTBE:methanol:water by volume. Using a flow rate of 1.0 mL/min at 20 °C, a linear elution gradient from 100% A to 100% B over 15 min was followed by 12 min of 100% B before returning to mobile phase A over 3 min. HPLC standards (astaxanthin, lycopene, β-carotene, zeaxanthin, and canthaxanthin) were purchased from Santa Cruz Biotechnology for identification of carotenoid retention times. Zeaxanthin was used to identify isozeaxanthin, as this compound cannot be purchased, and these isomers are known to co-elute using C18 chromatography [20].
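For clarity, the elution program can be restated as a piecewise function of time; this is an illustrative restatement of the gradient, not instrument control code.

```python
# Gradient program from the text: 100% A -> 100% B over 15 min,
# hold 100% B for 12 min, return to A over 3 min.

def percent_b(t_min: float) -> float:
    """Percentage of mobile phase B at time t (minutes)."""
    if t_min <= 15:
        return 100 * t_min / 15          # linear ramp A -> B
    if t_min <= 27:
        return 100.0                     # hold at 100% B
    if t_min <= 30:
        return 100 * (30 - t_min) / 3    # linear return to A
    return 0.0

for t in (0, 7.5, 15, 21, 27, 30):
    print(f"{t:5.1f} min: {percent_b(t):5.1f}% B")
```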
Results
To compare the effects of different upstream pathways on the production of astaxanthin in an existing system, the pAC-BETAipi and pCBFD1 plasmids were transformed into MG1655 (DE3), into MG1655 (DE3) with the trcMEP operon inserted into the chromosome, and co-transformed with the pSEVA228-pro4IUP plasmid, resulting in strains ASTA 1, ASTA 2, and ASTA 3, respectively. Each strain was grown in LB media as well as M9 media, and the results are presented in Figure 3.

In complex media such as LB, the strains expressing an upregulated MEP pathway were the most productive for carotenoid production, resulting in a maximum carotenoid titre of 6.05 ± 0.95 mg/L at 36 h (Figure 3A). This was a 2.5-fold increase in carotenoid titre over the wild-type strain. The IUP-expressing strain had a lower titre than the trcMEP strain, but both strains reached the same carotenoid content by 36 h (2.87 ± 0.58 and 2.87 ± 0.67 at 36 h, respectively). These results are explained by the higher cell density of the wild-type and trcMEP strains over the course of the cultivation; the IUP strain only reached half the cell density of the wild-type strain (1.37 ± 0.25 g/L vs. 2.67 ± 0.58 g/L, respectively). Interestingly, when grown in M9 media, a minimal glucose media, the IUP strain dramatically outperformed the trcMEP and wild-type strains, producing 11.3 ± 0.55 mg/L of total carotenoids. This was a 13-fold increase over the wild-type strain and an 11-fold increase over the trcMEP strain in M9 media, and approximately double the titre produced by the trcMEP strain in LB media. The type of media used also had an effect on when carotenoid production ceased. In LB media, the maximum carotenoid titre and content were reached by 36 h. However, in M9 media, production of carotenoids continued until 48 h in the IUP strain but ceased by 12 h in the wild-type MEP and trcMEP strains. These differences are likely due to the depletion of nutrients in LB/M9 for the wild-type and trcMEP strains, which depend on glucose or amino acids from the media for precursors through the MEP pathway. In the IUP strain, biosynthesis of carotenoids could continue because of the exogenous isoprenol added to the media, which is not used for central carbon metabolism or cell maintenance energy.

In order to observe the role of the downstream operon structure in different media, a new plasmid was constructed (p5T7-Astaipi) with all of the genes necessary for astaxanthin production under the control of the T7 promoter. This was also constructed to reduce the metabolic burden of the IUP strain, which required three plasmids for carotenoid production. CrtE was replaced with ggpps from Taxus canadensis, which was previously reported to increase lycopene production in E. coli [15], and a copy of ispA was added to increase FPP production. The results are presented in Figure 4. A similar trend was obtained using a different downstream plasmid, with the trcMEP strain resulting in higher titres in complex media (Figure 4A) and the IUP pathway resulting in higher titres in the minimal media (Figure 4B). When compared to the original two-plasmid system, the trcMEP titre was not significantly different at 36 h in LB media (6.05 ± 0.95 vs. 5.26 ± 0.49, t-test p > 0.01, n = 3) or M9 media (1.01 ± 0.11 vs. 0.99 ± 0.06, t-test p > 0.01, n = 3). However, the IUP strain had a 3.6-fold decrease in titre with the new single-plasmid system. The isoprenoid pathways and the carotenoid pathways are known for their sensitivity to protein levels, and many studies have observed that precise balancing of proteins may be needed to achieve the best titres [21,22]. The operon of the p5T7-Astaipi plasmid contains seven coding sequences. Due to this length, the translation of genes near the end of the operon may be less frequent than of those at the front, as placement in an operon is known to affect translational efficiency [23]. The plasmid used to make p5T7-Astaipi, p5T7-lycipi-ggpps, has been reported as one of the fastest producers of lycopene in the literature. Therefore, the remaining crtY, cbfd1, and hbfd1 genes were placed together in an operon controlled by the same T7 promoter, and a copy of ispA was added to p5T7-lyc-ggpps to create pAC-ASTA and p5T7-lycipi-ispA. The new plasmids were combined with the wild-type MEP, the trcMEP, and the IUP upstream pathways and grown in M9 media. The results are shown in Figure 5.
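The significance comparison above can be reproduced from the reported summary statistics alone. The sketch below uses SciPy's two-sample t-test from summary statistics; the paper does not state whether pooled or Welch variances were used, so the SciPy default (pooled) is an assumption.

```python
# Reproducing the trcMEP comparison (LB media, 36 h) from mean ± SD, n = 3.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=6.05, std1=0.95, nobs1=3,   # two-plasmid system
                            mean2=5.26, std2=0.49, nobs2=3)   # single-plasmid system
print(f"t = {t:.2f}, p = {p:.3f}")  # p well above the stated 0.01 threshold
```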
The results were expected to be similar to the previous single-operon system. The IUP strain had the same titre and carotenoid content as the previous plasmid system; both peaked at 24 h with total carotenoid titres of 3.65 ± 0.39 mg/L (ASTA 9) and 3.42 ± 0.40 mg/L (ASTA 6), respectively (Figure 5). However, the new system with two operons performed better for the trcMEP and wild-type strains, increasing the titre 2.8-fold and 4.8-fold, respectively. HPLC analysis of the carotenoids produced in strains ASTA 3, 6, and 9 showed that astaxanthin was produced in all strains, although strains ASTA 6 and ASTA 9 produced significantly more than ASTA 3 (Figure A1). All strains contained some amount of unconverted carotenoid intermediates, with ASTA 3 producing mostly β-carotene. When comparing the productivity of all strains in M9 media over the 48 h cultivation period, the IUP strains outperformed the endogenous MEP strain and the strain with an overexpressed MEP pathway (Figure 6A). The first strain (ASTA 3) had the highest productivity of all of the strains, but only a small portion of the products was astaxanthin (Figure 6B). The first iteration of new plasmids placed cbfd1 under the T7 promoter instead of the arabinose promoter. This resulted in a greater proportion of astaxanthin (34.6 ± 3.0% vs. 56.4 ± 0.8%) and a decrease in β-carotene production (40.7 ± 4.3% vs. 24.6 ± 0.5%). However, in strain ASTA 9, astaxanthin was the major product (73.5 ± 0.2%), and no canthaxanthin was detected.
Discussion
The astaxanthin β-carotene hydroxylase (CHY) and ketolase enzymes from A. aestivalis used in this work (CBFD1/HBFD1) have not previously been used in metabolic engineering efforts for xanthophyll production. A survey of the literature shows that almost all studies to date have focused on the use of CrtW and CrtZ from a limited number of bacterial species (Table 2), with a small number of studies employing the BKT enzyme from C. reinhardtii and the CHY from the microalga H. pluvialis [24]. Cunningham et al. (2011) [16] first reported the production of astaxanthin in E. coli using CBFD1 and HBFD1; however, the production of carotenoids and the relative composition were not reported for this gene combination. The operon construction had a significant impact on the overall productivity of the strain, as did the combination of upstream isoprenoid and carotenoid pathways (Figures 3-5). Interestingly, a striking difference in productivity was found for strain ASTA 3 in LB and M9 media.
The differences in carotenoid production between the ASTA 1-3 and ASTA 4-6 strains may be due to the different promoters used in each system. Minimal media supplemented with glucose activates catabolite repression, which can lead to lower transcription levels for certain promoters such as the trc promoter [25]; this explains why carotenoid titre and content decreased in M9 media for the trcMEP strain (6-fold decrease in titre). However, the pro4 promoter is a synthetic promoter [26] that should not be affected by catabolite repression, yet the titre was 3-fold higher in M9 than in LB media. This could be a significant advantage for the IUP pathway, as minimal salt-based media are inexpensive at large scale and may result in greater reproducibility. Currently, it is unknown why carotenoid production was significantly higher in the minimal media with glucose, as productivity is normally decreased in these types of media, presumably because the cell must dedicate greater resources towards de novo synthesis of nucleotides, amino acids, and vitamins that would otherwise be obtained from rich media ingredients such as yeast extract. However, there are many possible reasons for this difference, such as large changes in overall metabolic flux balance, isoprenol binding to peptides in the media through hydrogen bonding, changes in the rate of isoprenol evaporation from the media, or changes in gene expression levels in different media; elucidating these differences will be the subject of a more extensive investigation.
The accumulation of intermediate carotenoids in each strain also differed depending on the structure of the carotenoid operon(s). When cbfd1 was moved from the arabinose promoter to a stronger T7 promoter, and when the operon was split into two operons controlled by two separate T7 promoters, astaxanthin production increased. In strain ASTA 9, there was no accumulation of canthaxanthin, suggesting that HBFD1 might be the rate-limiting step in this strain. CBFD1 was likely the rate-limiting step in strain ASTA 3, as a significant amount of β-carotene accumulated in this strain (Figures 6 and A1). Fusions of CrtW/Z have been shown to be an effective strategy for increasing the conversion of zeaxanthin to astaxanthin by localizing the subsequent enzyme near the site of product formation [32,42]. Similarly, fusion to membrane proteins for targeted localization also improved astaxanthin production [39]. Chou et al. (2019) also found that multiple promoters enhanced the biosynthesis of astaxanthin by increasing the efficiency of β-carotene conversion compared to a single-operon system [31]. From Table 2, it can be seen that the species of origin, copy number, promoter, and combination of upstream and downstream genes used all play a significant role in the overall productivity of the system. The highest astaxanthin content found to date was achieved in strains with changes to membrane morphology and higher reactive oxygen species (ROS) levels [30]. However, these strains also exhibited decreased cell growth. Perhaps using CRISPR interference (CRISPRi) in a two-stage process might allow higher astaxanthin production after the majority of cell growth has occurred. A summary of this study and the changes and improvements made are shown in Figure 7.
Conclusions
The IUP pathway significantly increased carotenoid production in E. coli in minimal media rather than in complex media, with an 11-fold increase in carotenoid yield in M9 media compared to LB media. The genes cbfd1/hbfd1 were capable of producing astaxanthin at a level similar to the CrtW/Z enzymes of bacterial origin. Similarly, the bottlenecks in the xanthophyll portion of the pathway were dependent on the promoters and operon organization of the carotenoid pathway genes and cbfd1/hbfd1, as seen in other reports. Future work elucidating the effect of growth media on overall productivity may provide insights that will improve astaxanthin production. Finally, future studies into possible combinations of CBFD1/HBFD1 and CrtW/Z enzymes with complementary specificities to alleviate possible bottlenecks in the xanthophyll portion of the pathway may be useful for increasing the proportion of astaxanthin produced without reducing the overall carotenoid productivity.
Table A2. List of primers used in the plasmid assemblies. The plasmid pACT7-CBFD1 was used as an intermediate step in the synthesis of pAC-ASTA due to the large number of DNA fragments.
Figure 2. Plasmid designs used in this study for astaxanthin production. The gene organization of each operon is shown in the left-hand boxes, while the biosynthesis precursors and products are shown in the right-hand boxes. Upstream operons are located either on a plasmid or in the chromosome. Plasmids that form carotenoid intermediates can be transformed together to complete the pathway.
Figure 3. Total carotenoid production in strains 1-3 containing the wild-type (1), trcMEP (2), or IUP (3) pathway and the pAC-BETAipi and pCBFD1 plasmids. Cultures were grown in (A) LB media or (B) M9 media and induced with 0.1 mM IPTG. Cell growth by dry cell weight is plotted on the left-hand side. Total carotenoids were quantified, and carotenoid concentration (solid lines) and carotenoid content (dashed lines) are shown on the right-hand plots.
Figure 4. Total carotenoid production in strains 4-6 containing the wild-type (4), trcMEP (5), or IUP (6) pathway and the p5T7-Astaipi plasmid. Cultures were grown in (A) LB media or (B) M9 media and induced with 0.1 mM IPTG. Cell growth by dry cell weight is plotted on the left-hand side. Total carotenoids were quantified, and carotenoid concentration (solid lines) and carotenoid content (dashed lines) are shown on the right-hand plots.
Figure 5. Total carotenoid production in ASTA strains containing the wild-type (7), trcMEP (8), or IUP (9) pathway and the p5T7-lycipi-ispA and pAC-ASTA plasmids. Cultures were grown in M9 media and induced with 0.1 mM IPTG. Cell growth by dry cell weight is plotted on the left-hand side. Total carotenoids were quantified, and carotenoid concentration (solid lines) and carotenoid content (dashed lines) are shown on the right-hand plots.

Figure 6. Productivity and carotenoid composition of ASTA strains grown in M9 media. (A) Total productivity of each strain over a 48 h cultivation period. (B) Percent composition of carotenoids extracted from strains 3, 6, and 9 based on HPLC analysis.
Figure 7. Stepwise improvement of astaxanthin production and purity in the course of this study.
Figure A1. HPLC analysis of strains ASTA 3, 6, and 9 at 475 nm. Peaks corresponding to astaxanthin, isozeaxanthin*, canthaxanthin, and β-carotene are labelled. * Isozeaxanthin was assumed to co-elute with zeaxanthin, which was used as the standard for detection.
Table 1. Plasmids and strains used in this study.
Table 2. Summary of the carotenoid content and titres reported in the literature and the genes used in previous studies. * Estimated using the cell dry weight correlation of 0.33 g/L/OD600 for E. coli. Ch() represents chromosomal integration of the listed genes.
"Environmental Science",
"Biology",
"Chemistry"
] |
BERxiT: Early Exiting for BERT with Better Fine-Tuning and Extension to Regression
The slow speed of BERT has motivated much research on accelerating its inference, and the early exiting idea has been proposed to make trade-offs between model quality and efficiency. This paper aims to address two weaknesses of previous work: (1) existing fine-tuning strategies for early exiting models fail to take full advantage of BERT; (2) methods to make exiting decisions are limited to classification tasks. We propose a more advanced fine-tuning strategy and a learning-to-exit module that extends early exiting to tasks other than classification. Experiments demonstrate improved early exiting for BERT, with better trade-offs obtained by the proposed fine-tuning strategy, successful application to regression tasks, and the possibility to combine it with other acceleration methods. Source code can be found at https://github.com/castorini/berxit.
Introduction
Large-scale pre-trained language models such as BERT (Devlin et al., 2019) have brought the natural language processing (NLP) community large performance gains, but at the cost of a heavy computational burden. While pre-trained models are available online and fine-tuning is typically done without a strict time budget, inference has a much lower latency tolerance, and the slow inference speed of these models can impede easy deployment. It becomes even more difficult when inference has to be done on edge devices due to limited network capabilities or privacy concerns.
Early exiting (Schwartz et al., 2020; Xin et al., 2020a) has been proposed to accelerate the inference of BERT and models with a similar architecture, i.e., those comprising multiple transformer layers (Vaswani et al., 2017) with a classifier at the top. Instead of using only one classifier, additional classifiers are attached to each transformer layer (see Figure 1), and the entire model is fine-tuned together. At inference time, a sample can perform early exiting through one of the intermediate classifiers.

Figure 1: Multi-output structure of early exiting BERT.
While existing early exiting papers provide promising quality-efficiency trade-offs, improvements are needed for two important components: fine-tuning strategies and exiting decision making. In these papers, fine-tuning strategies are relatively simple and fail to take full advantage of the pre-trained model's effectiveness; we propose a novel fine-tuning strategy, Alternating, for this multi-output model. Moreover, previous work makes exiting decisions based on the confidence of output probability distributions, and is hence only applicable to classification tasks; we extend it to other tasks by proposing the learning-to-exit idea. With carefully designed fine-tuning strategies and methods for making exiting decisions, the model can achieve better quality-efficiency trade-offs and can be extended to regression tasks.
We refer to our proposed ideas collectively as BERxiT (BERT + exit), and apply them to Muppets including BERT, RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2020); we also apply them on top of another BERT acceleration method, DistilBERT (Sanh et al., 2019). We conduct experiments on datasets including classification and regression tasks, and show that our method can save up to 70% of inference time with minimal quality degradation.
Our contributions include the following: (1) an effective fine-tuning method Alternating; (2) the learning-to-exit idea that extends early exiting to tasks other than classification; (3) extensive experiments that show the effectiveness of our ideas and the successful combination of early exiting with other BERT acceleration methods; (4) additional experiments that provide insight into the inner mechanism of pre-trained models.
Related Work
BERT (Devlin et al., 2019) is a pre-trained multilayer transformer (Vaswani et al., 2017) model. RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2020) are variants of BERT with almost identical model architectures but different training methods and parameter sharing strategies. In our paper, we refer to these transformer-based models as Muppets, and apply our method on them.
In the general deep learning context, there are a number of well-explored methods to accelerate model inference. Pruning (Han et al., 2015; Fan et al., 2020; Gordon et al., 2020) removes unimportant parts of the neural model, from individual weights to layers and blocks. Quantization (Lin et al., 2016; Shen et al., 2020) reduces the number of bits needed to operate a neural model and to store its weights. Distillation (Hinton et al., 2015; Jiao et al., 2020) transfers knowledge from large teacher models to small student models. These methods typically require pre-training Muppets from scratch and produce only one small model with a predetermined target size. Early exiting requires only fine-tuning and also produces a series of small models, from which the user can choose flexibly. It extends the idea of Adaptive Computation (Graves, 2016) for recurrent neural networks, and is also closely related to BranchyNet (Teerapittayanon et al., 2016), Multi-Scaled DenseNet (Huang et al., 2018), and Slimmable Network (Yu et al., 2019).
Early exiting for Muppets has been explored by RTJ (Schwartz et al., 2020), DeeBERT (Xin et al., 2020a,b), and FastBERT. Despite their promising results, there is still room for improvement regarding the fine-tuning strategies of RTJ and DeeBERT. FastBERT, on the other hand, uses self-distillation (Zhang et al., 2019; Phuong and Lampert, 2019) for fine-tuning, which works well for small Muppets such as BERT BASE. However, our preliminary experiments (Appendix A) show that self-distillation is unstable on larger Muppets such as BERT LARGE, suggesting that future work is necessary for fully understanding and robustly applying self-distillation on Muppets.
All three of these methods make early exiting decisions based on the confidence (or variants thereof) of the predicted probability distribution, and are therefore limited to classification tasks. Runtime Neural Pruning (Lin et al., 2017), SkipNet (Wang et al., 2018b), and BlockDrop use reinforcement learning (RL) to decide whether to execute a network module. Universal Transformer (Dehghani et al., 2019) and Depth-Adaptive Transformer (DAT; Elbayad et al., 2020) use learned decisions for early exiting in sequence-to-sequence tasks. Concurrently, PABEE proposes patience-based early exiting, which is applicable to regression, but it relies on inter-layer prediction consistency and is therefore not very efficient for exiting at early layers. Inspired by them, we propose a method to extend early exiting for Muppets to regression tasks, using only layer-specific information as in classification. Moreover, our method requires neither RL nor complicated distribution fitting as in DAT, but uses a straightforward layer-wise certainty estimation, and achieves performance comparable with confidence-based early exiting on classification tasks.
Model Structure and Fine-Tuning
We start from a pre-trained Muppet model (the backbone model), attach additional classifiers to it, fine-tune the model, and use it for accelerated inference by early exiting.
Backbone model The backbone model is an n-layer pre-trained Muppet model. We denote the i-th layer hidden state corresponding to the [CLS] token as h_i:

h_i = f_i(x; θ_1, ..., θ_i),

where x is the input sequence, θ_i is the parameters of the i-th transformer layer, and f_i is the mapping from the input to the i-th layer hidden state.
Classifiers In the original BERT paper (Devlin et al., 2019), the way to fine-tune is to attach a classifier to the final transformer layer, and then to jointly update both the backbone model and the classifier. The classifier is a one-layer fully-connected network. It takes as input the final layer hidden state h_n and outputs a prediction: a probability distribution over all classes for classification tasks, and a scalar for regression tasks (in this case, we still refer to this one-layer network as a classifier for naming consistency).
To enable early exiting, we instead attach a classifier to every transformer layer, i.e., there are n classifiers in total. Each classifier can make its own prediction, and therefore the model can accelerate inference by exiting earlier.
Fine-tuning strategies We discuss how to fine-tune this multi-output network. The loss function for the i-th layer classifier is

L_i = H(g_i(h_i; w_i), y),

where x and y are the input sequence and corresponding label, g_i is the i-th layer's classifier, w_i the parameters of g_i, and H the task-specific loss function, e.g., cross-entropy for classification tasks and mean squared error (MSE) for regression tasks. The most straightforward fine-tuning strategy is perhaps minimizing the sum of all classifiers' loss functions and jointly updating all parameters in the process. We refer to this strategy as Joint, and it is also used in RTJ (Schwartz et al., 2020):

min_{θ_1..θ_n, w_1..w_n} Σ_{i=1}^{n} L_i.

If we hope to preserve the best model quality for the final layer, the desired fine-tuning strategy is Two-stage, which is also used in DeeBERT (Xin et al., 2020a). The first stage is identical to vanilla BERT fine-tuning: updating the backbone model and only the final classifier. In the second stage, we freeze all parameters updated in the first stage, and fine-tune the remaining classifiers. The objectives in the two stages are as follows:

Stage 1: min_{θ_1..θ_n, w_n} L_n;
Stage 2: min_{w_1..w_{n-1}} Σ_{i=1}^{n-1} L_i.
These two fine-tuning strategies are not ideal. Intuitively, in this multi-output network, the loss functions of different classifiers interfere with each other in a negative way. Transformer layers have to provide hidden states for two competing purposes: immediate inference at the adjacent classifier and gradual feature extraction for future classifiers. Therefore, achieving a balance between the classifiers is critical. Two-stage produces final classifiers with optimal quality at the price of earlier layers, since most parameters are solely optimized for the final classifier. Joint treats all classifiers equally, and therefore its final classifier is less effective than that of Two-stage. To combine the advantages, we propose a novel fine-tuning strategy, Alternating. It alternates between two objectives (the first-stage objective of Two-stage for odd-numbered iterations, and the Joint objective for even-numbered ones):

Odd: min_{θ_1..θ_n, w_n} L_n;
Even: min_{θ_1..θ_n, w_1..w_n} Σ_{i=1}^{n} L_i.

Combining objectives from Joint and Two-stage, Alternating has the potential to find the most preferable region in the parameter space: the intersection between the optimal regions for different layers.
Exiting Decision Making
After the entire model (including the backbone and all classifiers) is fine-tuned, it can perform early exiting for an inference sample. In this section we discuss two methods to make exiting decisions.
Confidence Threshold
When the model is "certain" enough of its prediction at an intermediate layer, the forward inference can be terminated.
For classification tasks, a straightforward measurement of the prediction certainty is the maximum probability of the output prediction, which is referred to as confidence in previous work (Schwartz et al., 2020). Similarly, Xin et al. (2020a) use entropy as the metric, which is also closely related to confidence. Before inference starts, a confidence threshold is chosen. In forward propagation, the confidence of the output at each layer is compared with the threshold; if it is larger than the threshold at a certain layer, the sample exits and future layers are skipped.
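The inference loop is equally simple. The following is a minimal sketch under our own naming (per-layer `layers`, `classifiers`, and an `embed` lookup), assuming a batch size of one as used in our inference experiments.

```python
# Confidence-threshold early exiting at inference time (illustrative sketch).
import torch

@torch.no_grad()
def early_exit_predict(x, embed, layers, classifiers, threshold: float):
    h = embed(x)
    for i, (layer, clf) in enumerate(zip(layers, classifiers)):
        h = layer(h)
        probs = torch.softmax(clf(h), dim=-1)
        confidence, pred = probs.max(dim=-1)      # max class probability
        if confidence.item() > threshold or i == len(layers) - 1:
            return pred, i + 1                    # prediction and exit layer
```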
Learning to Exit
While using a confidence or entropy threshold is straightforward and effective, it exploits the fact that the classifier's output is a probability distribution in classification tasks. This is generally not the case for other tasks such as regression. To address the gap, we propose learning-to-exit (LTE) as a substitute when such a distribution is unavailable.
The i th layer hidden state h i is a vector in the embedding space. Intuitively, different regions of the embedding space have different certainty levels. For instance, in binary classification tasks, regions closer to the decision boundary have a lower certainty level, and this is explicitly expressed as a lower confidence of the output probability distribution. But even when certainty cannot be explicitly measured, we can still train an auxiliary LTE module to estimate such a metric.
Concretely, the LTE module is a simple one-layer fully-connected network. It takes as input the hidden state h_i and outputs the certainty level u_i of the sample at the i-th layer:

u_i = σ(c · h_i + b),

where σ is the sigmoid function, c is the weight vector, and b is the bias term. The loss function for the LTE module is a simple MSE between u_i and the "ground truth" certainty level ũ_i at the i-th layer:

J_i = (u_i − ũ_i)².

For classification, the ground truth certainty level is whether the classifier makes the correct prediction:

ũ_i = 1[arg max_j g_i^(j) = y],

where g_i is the output probability distribution at the i-th layer and g_i^(j) is its j-th entry. For regression, the ground truth certainty level is negatively related to the prediction's absolute error:

ũ_i = max(0, 1 − |g_i − y|).

To apply LTE, we initialize the LTE module together with the classifiers, and it is shared among all layers. We train the LTE module jointly with the rest of the model by substituting L_i with L_i + J_i in the above objectives. At inference time, if the predicted certainty level is higher than the chosen threshold, the inference sample performs early exiting.
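A minimal PyTorch sketch of the LTE module and its targets, following the formulas above, is shown below; the class and function names are ours, not those of the released code.

```python
# Learning-to-exit module and ground-truth certainty targets (sketch).
import torch
import torch.nn as nn

class LTE(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)   # weight vector c, bias b

    def forward(self, h_i: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(h_i)).squeeze(-1)  # u_i in (0, 1)

def lte_target(prediction: torch.Tensor, label: torch.Tensor, task: str) -> torch.Tensor:
    """Ground-truth certainty for one layer's prediction."""
    if task == "classification":                  # 1 iff the layer is correct
        return (prediction.argmax(-1) == label).float()
    # regression: certainty decreases with absolute error, clipped at 0
    return torch.clamp(1.0 - (prediction - label).abs(), min=0.0)

# J_i is the MSE between the predicted u_i and the target:
lte_loss = nn.MSELoss()
```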
Setup
We conduct experiments on six classification datasets of the GLUE benchmark (Wang et al., 2018a); since GLUE contains only one regression dataset, STS-B (Cer et al., 2017), we additionally use another regression dataset, SICK (Marelli et al., 2014). Statistics of these datasets are listed in Table 1. Our implementation is adapted from the Huggingface Transformer Library (Wolf et al., 2020). We conduct searches over experiment settings such as the optimizer, learning rates, hidden state sizes, and dropout probabilities, and find that it is best to keep the original settings from the library. Random seeds are also left unchanged from the library for fair comparisons. Most results in this paper use the dev split, since the large number of evaluations we need is forbidden by the GLUE evaluation server. The only exception is Table 2, where we report model quality-efficiency trade-offs on the test split.
Layer-wise Scores Comparison
We discuss three fine-tuning strategies in Section 3: Joint (also used in RTJ), Two-stage (also used in DeeBERT), and Alternating (proposed in this paper). In tables and figures, they are labeled as JOINT, 2STG, and ALT, respectively. Figures 2 and 3 compare these three fine-tuning strategies by showing their layer-wise score curves: each point in a curve shows the output score at a certain exit layer, i.e., all samples are required to exit at this layer for evaluation. More specifically, we report relative scores, where the 100% baseline is the original score of the vanilla Muppet without early exiting; this is also the score of the final layer of Two-stage, because of parameter freezing in its second stage.
For BERT BASE, we show plots for all six classification datasets, ordered by their training set sizes from smallest to largest. As we will see in later analyses, low-resource datasets show the most difference. Therefore, for RoBERTa BASE and ALBERT BASE, we only show plots for RTE and MRPC (with training set sizes smaller than 6% of the others) due to space limitations; results for the other datasets are in Appendix C. We observe the following from the figures:

• Two-stage is unsatisfying. While it achieves the best score at the final layer, it comes at a large cost to other layers, especially for non-low-resource datasets.

• Alternating is better than Joint in later layers, and weaker in earlier layers. However, as we will see in the next section, when we evaluate quality-efficiency trade-offs of confidence-based early exiting, the weakness of Alternating in earlier layers is no longer substantial, while its advantage is preserved.

• The difference between Joint and Alternating is larger for low-resource datasets, where the training set is insufficient to fine-tune all layers well simultaneously.

• Interestingly, for ALBERT BASE, Alternating's relative scores are higher than 100% in the final layers. We speculate that this is because of the parameter sharing nature of ALBERT and the small sizes of the datasets: better supervision for intermediate layers also helps the final layer.
Early Exiting Trade-offs Comparison
From the previous section, we see that Two-stage is visibly less preferable than the other two strategies. Therefore, in this section, we compare the quality-efficiency trade-offs of Joint and Alternating when a confidence threshold is used for making exiting decisions. Specifically, we use the average exit layer of all inference samples as the metric of efficiency for the following reasons: (1) it is linear w.r.t. the actual amount of computation; (2) according to our experiments, it is proportional to actual wall-clock runtime, and is also stable across different runs (direct measurement of wall-clock runtime is frequently affected by other processes on the same machine; detailed discussions can be found in Appendix D). We visualize the trade-offs in Figures 4 and 5, and also show detailed numbers in Table 2 using results from the test set. Dots in the figures and ALT rows in the table are generated by varying the confidence threshold, and the thresholds are chosen to show trade-offs at different average exit layers. In addition to the comparison between Joint and Alternating, we add another strong baseline, DistilBERT (Sanh et al., 2019). We apply Alternating fine-tuning and early exiting on top of DistilBERT (labeled as DB+ALT), and the rightmost point of the curve is DistilBERT itself without early exiting (the green marker). Observations from the table and figures are as follows:

• On the test set, early exiting with Alternating fine-tuning saves a large amount of inference computation, with only minimal quality degradation, compared with vanilla Muppets.

• Compared with Joint, Alternating inherits its benefits from the previous section: better trade-offs at higher scores (larger average exit layer). Additionally, its improvements are larger on smaller datasets.

• Alternating's weakness at more aggressive exiting (smaller average exit layer) is minimized. Taking Figure 4 as an example, we report the area of one curve above the other as a numerical metric: (JOINT over ALT, ALT over JOINT) is (0.4, 13.5) for RTE, (0.9, 8.8) for MRPC, and (0.2, 18.2) for SST-2. The advantage of Alternating indicates that later layers intrinsically contribute more to early exiting performance, partly because the final layer's score is the upper bound for all previous layers (ignoring randomness in training). This shows that the Joint fine-tuning strategy, which treats all layers equally, is not ideal.

• In most cases, Alternating outperforms DistilBERT, which requires distillation in pre-training and is therefore much more resource-demanding. It also further improves model efficiency on top of DistilBERT, indicating that early exiting is cumulative with other acceleration methods.
Learning to Exit Performance
To examine the effectiveness of LTE, we apply it on top of models fine-tuned with Alternating. We show the results in Figure 6 on four datasets. We use the layer-wise score of Alternating as the baseline: if we want to save x% of inference runtime, a straightforward way is to use the first (100 − x)% of layers for every sample, regardless of its difficulty. LTE is expected to dynamically allocate resources based on a sample's difficulty and therefore outperform this baseline. From the figures for QNLI and QQP, we observe that the blue curves are substantially above the orange curves, i.e., LTE provides better accuracy-efficiency trade-offs than the layer-wise baseline, achieving the same model quality with less computation. For the regression tasks STS-B and SICK, the layer-wise baseline reaches its maximum score at relatively early layers, leaving little room for LTE to perform. Nevertheless, LTE still outperforms the baseline, especially in earlier layers (note that the y-axis is from 0 to 100%).
We also compare LTE with the concurrent patience-based baseline PABEE in Table 3, showing their speedups and average exit layers at the same relative scores. PABEE does not provide exact speedup numbers; therefore we estimate the values from their figures. We can see that Alternating fine-tuning plus LTE is marginally better than PABEE on regression tasks.
We further compare LTE-predicted certainty for each layer with layer-wise scores in Figure 7, where we observe large differences of predicted certainty both within and across layers. Also, predicted certainty is generally positively correlated with scores. This further demonstrates that the LTE module successfully captures certainty information based on the model's hidden state.
LTE extends confidence-based early exiting to tasks other than classification. Furthermore, our LTE module is more straightforward and intuitive than DAT (Elbayad et al., 2020), yet achieves comparable results on classification tasks.
Prediction Confidence as a Probe
So far, we have regarded prediction confidence only as something produced by the black-box model and used it for making early exiting decisions. In this section, we show an example of how confidence is related to a human-interpretable feature, demonstrating its potential to reveal the inner mechanisms of Muppet models. We choose two datasets, MRPC and QQP, where the task is to predict whether two input sequences are semantically equivalent. Intuitively, the BLEU score (Papineni et al., 2002) between the two sequences, which measures n-gram matching, may be related to the prediction. At each output layer, we first divide all dev set samples into two subsets by whether they are predicted as positive or negative; then, we calculate the BLEU-4 score for each sample, and calculate the Pearson correlation between BLEU scores and confidence in each subset; finally, we compare the correlations for both subsets in each layer, along with the layer-wise relative scores, in Figure 8.
We notice that the BLEU scores and predicted confidence show the strongest correlation in the layers where the model quality starts to improve (layers 4-5 in MRPC and 2-3 in QQP). After these layers, the correlation gradually weakens. This suggests that in the early layers, the model relies more on simple features such as n-gram matching for making semantic judgments: the higher the BLEU score, the more certain it is in making positive predictions and the less certain it is in making negative ones; however, with more layers, the model acquires the ability to look beyond the BLEU score, reducing its reliance on n-gram matching and achieving better performance. Therefore, with MRPC as an example, analyzing the differences between layers 3 and 4 may reveal how the model detects n-gram matching, and analyzing the differences between layers 4 and 6 may reveal advanced semantic features learned by the model.
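The probing analysis can be replicated along these lines: the sketch below uses NLTK's smoothed sentence-level BLEU as a stand-in for the paper's BLEU-4 computation and assumes a hypothetical list of per-sample records.

```python
# Correlating per-sample BLEU with prediction confidence (illustrative sketch).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr

smooth = SmoothingFunction().method1

def bleu_confidence_corr(samples, predicted_label):
    """`samples` is a list of (sent1, sent2, confidence, prediction) tuples."""
    subset = [s for s in samples if s[3] == predicted_label]
    bleus = [sentence_bleu([s1.split()], s2.split(), smoothing_function=smooth)
             for s1, s2, _, _ in subset]
    confs = [conf for _, _, conf, _ in subset]
    return pearsonr(bleus, confs)[0]   # Pearson r between BLEU and confidence
```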
Conclusion and Future Work
To improve early exiting for Muppets, we present BERxiT, including the Alternating fine-tuning strategy, which outperforms methods from previous papers, and the LTE idea, which extends early exiting to a broader range of tasks. Experiments show the effectiveness of Alternating in providing better quality-efficiency trade-offs and the successful application of LTE to regression tasks. They also show that early exiting is cumulative with other acceleration methods such as DistilBERT and has potential for model interpretation.
Future Work The fundamental question of early exiting for Muppets is how many transformer layers are sufficient for making good predictions. We draw inspiration from the Limit performance of Muppets: the score of Limit at the i-th layer is obtained by taking the first i transformer layers from the pre-trained Muppet model, attaching a classifier to the i-th layer, and fine-tuning this single-output model. Limit estimates the upper bound for any fine-tuning method by removing inter-classifier interference. We compare the Limit performance of BERT BASE and BERT LARGE in Figure 9, and notice that with the same number of layers (and an identical fine-tuning strategy), BERT BASE almost always outperforms BERT LARGE by a large margin. This suggests that most transformer layers' potential to provide information for early exiting is limited by the single-output nature of pre-training. If we want to further improve early exiting Muppets for better trade-offs, adding more exiting paths in pre-training would be a promising direction.

Figure 9: Comparison between LIMIT of BERT BASE (brown) and BERT LARGE (blue). Red arrow: difference between the two models when they both use 12 layers.
A Negative Results for Self-Distillation
FastBERT does not provide results for BERT LARGE or RoBERTa LARGE. We show BERT LARGE and RoBERTa LARGE layer-wise scores for different fine-tuning strategies in Figure 10. SD in the legend stands for self-distillation. We can see that for Two-stage and Alternating, the patterns are similar to those of BERT BASE: Alternating is better in earlier layers, while Two-stage is better in later layers.
However, self-distillation's behavior is inconsistent between models and datasets. While it performs as expected for BERT LARGE in SST-2 and MNLI, and for RoBERTa LARGE in MRPC, self-distillation fails to improve after the first few layers for BERT LARGE in MRPC and for RoBERTa LARGE in SST-2 and MNLI, and most layers' quality is considerably worse than with Alternating. We therefore consider self-distillation an unstable and premature fine-tuning strategy.
B Additional Experiment Setting
For pre-trained models, we use the following ones provided by the Huggingface Transformer Library (Wolf et al., 2020) as backbone models:

• DISTILBERT-BASE-UNCASED

For BERT, ALBERT, and DistilBERT, we fine-tune for 3 epochs; for RoBERTa, we fine-tune for 10 epochs; no early stopping or checkpoint selection is performed.
Experiments are done on a single NVIDIA P100 GPU with CUDA 10.1. For inference, we use a batch size of 1 (since we need to perform early exiting based on each individual sample's difficulty). Inference runtime for the entire dev set for all models and datasets is shown in Table 4. RoBERTa has the same model structure as BERT, and therefore its runtime is also very close to that of BERT. Note that this is affected by competing processes, and may vary between different runs.
Numbers of parameters for BERT and ALBERT backbone models can be found in the paper by Lan et al. (2020). RoBERTa shares the same model structure with BERT and has the same number of parameters. Numbers of parameters for early-exiting-specific modules, such as the additional classifiers and the LTE module, are on the order of thousands, and are therefore negligible compared with those of the backbone models (millions).
C Additional Experiment Results
In the main paper, we report results of RoBERTa BASE and ALBERT BASE only on the two smallest datasets. Results on the other datasets are provided in Figure 11. We can see that while Two-stage is visibly less preferable, Joint and Alternating are close to each other at larger dataset sizes, which is why we keep only the low-resource datasets in the main paper.
D Analyses of Efficiency Metric
In our experiments, we use the average exit layer as the metric of efficiency, for the following three reasons.
It is linear w.r.t. the amount of computation. Inference-time computation in our model occurs in the following parts: the embedding layer, the transformer layers, the classifiers, and the LTE module (if used). If a layer is chosen, i.e., the exit layer is at or after it, all components of that layer (transformer, classifier, LTE module) are used and incur computation cost. Additionally, embedding look-up (selecting a column in the embedding matrix) is much faster than the above components (which involve matrix-vector multiplication) and can therefore be neglected.
It is stable across different runs. With a fine-tuned model, an inference sample's exit layer depends only on the confidence (or LTE-predicted certainty) at each layer and the threshold. In contrast, direct measurement of wall-clock runtime is frequently affected by competing processes and fluctuates between runs.
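The average exit layer is also straightforward to compute. The sketch below is our illustration of a confidence-threshold exiting policy, with random stand-in confidences; it also shows why the metric is deterministic given a fixed model and threshold.

```python
import numpy as np

def average_exit_layer(confidences, threshold):
    """Average exit layer under confidence-thresholded early exiting.

    confidences: (n_samples, n_layers) array; confidences[i, j] is the
    confidence (or LTE-predicted certainty) of sample i at layer j+1.
    A sample exits at the first layer whose confidence exceeds the
    threshold, or at the last layer otherwise.
    """
    conf = np.asarray(confidences)
    n_samples, n_layers = conf.shape
    exceeds = conf > threshold
    # argmax on a boolean array returns the first True; samples that
    # never exceed the threshold exit at the final layer.
    first = np.where(exceeds.any(axis=1), exceeds.argmax(axis=1) + 1, n_layers)
    return first.mean()

rng = np.random.default_rng(1)
conf = rng.uniform(size=(1000, 12))  # stand-in confidences, 12-layer model
for t in (0.5, 0.9, 0.99):
    print(t, average_exit_layer(conf, t))
```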
The computation overhead of early exiting is negligible. Given the above reasons, only one concern remains for using the average exit layer as the efficiency metric: how do the additional components in our model (the extra classifiers and possibly the LTE module) compare with the transformer layers of the original BERT model? We estimate the FLOPs used in one sample's inference as follows.
Since we will eventually end up with differences of several orders of magnitude, we use big-Theta asymptotic notation for the estimates. Most computation is incurred by matrix and vector multiplication. Using the naïve implementation, the cost of multiplying two vectors in $\mathbb{R}^d$ is $\Theta(d)$, and the cost of multiplying a matrix in $\mathbb{R}^{d_1 \times d_2}$ by a vector in $\mathbb{R}^{d_2}$ is $\Theta(d_1 d_2)$. We denote by $d$ the hidden state size of our model (768 for base models and 1024 for large models), by $c$ the number of classes (fewer than 4 in our experiments), by $n$ the sequence length (typically in the hundreds), and by $h$ the number of heads in multi-head attention (12 for base and 16 for large).
The classifier is a one-layer fully-connected layer, mapping a vector in $\mathbb{R}^d$ to an output in $\mathbb{R}^c$; the cost is therefore $\Theta(cd)$. Similarly, the cost of the LTE module is $\Theta(2d)$, since its output is always a vector in $\mathbb{R}^2$.
The transformer layer mainly consists of multi-head self-attention, a fully-connected layer, and two layer-normalization modules. Layer normalization is much faster than the other two, so we neglect it. The fully-connected layer maps $n$ vectors from $\mathbb{R}^d$ to $\mathbb{R}^d$, therefore the cost is $\Theta(nd^2)$. The multi-head attention computes $h$ individual uni-head attentions. For each uni-head attention, the mapping from the original query, key, and value vectors ($\mathbb{R}^d$) to head-specific ones ($\mathbb{R}^{d/h}$) incurs a cost of $\Theta(nd^2/h)$; calculating the attention results incurs a cost of $\Theta(n^2d/h)$. Therefore the total cost here is $\Theta(nd^2 + n^2d) = \Theta(nd^2)$ for $n = O(d)$. Finally, the results of all heads are combined and one more matrix multiplication is needed. The total cost of one transformer layer is therefore $\Theta(nd^2)$.
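To make the comparison concrete, the asymptotic costs above can be evaluated for typical base-model values; the specific numbers below are our illustration, not figures from the paper.

```python
# Back-of-the-envelope comparison of the dominant per-component costs.
d, c, n = 768, 4, 128          # hidden size, classes, sequence length

classifier_cost  = c * d       # Theta(cd): one fully-connected layer
lte_cost         = 2 * d       # Theta(2d): two-dimensional output
transformer_cost = n * d * d   # Theta(nd^2): dominant term of one layer

print(f"classifier : {classifier_cost:>12,}")
print(f"LTE module : {lte_cost:>12,}")
print(f"transformer: {transformer_cost:>12,}")
print(f"ratio      : {transformer_cost / (classifier_cost + lte_cost):,.0f}x")
```

With these values the transformer layer is roughly four orders of magnitude heavier than the classifier and LTE module combined, matching the conclusion that follows.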
Considering the values of n and d, the classifier and the LTE module are several orders of magnitude lighter than the transformer layer. Even with advanced algorithms and parallel hardware that may accelerate transformer layers, we can still safely come to the conclusion that the computation overhead of early exiting is negligible.
Navigating the intersection of 3D printing, software regulation and quality control for point-of-care manufacturing of personalized anatomical models
3D printing technology has become increasingly popular in healthcare settings, with applications of 3D printed anatomical models ranging from diagnostics and surgical planning to patient education. However, as the use of 3D printed anatomical models becomes more widespread, there is a growing need for regulation and quality control to ensure their accuracy and safety. This literature review examines the current state of 3D printing in hospitals and the FDA regulation process for software intended for use in producing 3D printed models, and provides for the first time a comprehensive list of approved software platforms alongside the 3D printers that have been validated with each for producing 3D printed anatomical models. The process for verification and validation of these 3D printed products, as well as the potential for inaccuracy in these models, is discussed, including methods, limits, and standards for accuracy testing. This article emphasizes the importance of regulation and quality control in the use of 3D printing technology in healthcare, the need for clear guidelines and standards for both the software and the printed products to ensure the safety and accuracy of 3D printed anatomical models, and the opportunity to expand the library of regulated 3D printers.
Supplementary Information: The online version contains supplementary material available at 10.1186/s41205-023-00175-x.
Background to 3D printing anatomical models
3D printing, more accurately known as additive manufacturing, is playing an increasingly disruptive role in healthcare [1]. Broadly speaking, the fabrication technology uniquely lends itself to the clinical need to fabricate one-off products matching individual patient anatomy, and does not require high volumes to break even, as traditional manufacturing does [2]. 3D printing techniques rely on the additive deposition or fusion of material, layer by layer, to form 3D objects. This additive manufacturing paradigm unlocks tremendous design freedom and makes the technology ideally suited to fabricating patient-specific anatomic models or devices that typically entail complex geometries. 3D printing software often requires a CAD model as the input, which is 'sliced' into 2D layers and sequentially printed to form the 3D object [3].
Over the last decade, 3D printing has been increasingly used for fabricating 3D models of patient anatomy, providing an added dimension to medical scan data visualization previously unachievable at the point-of-care using screen-based visualization technologies [4]. Advances in accessible 3D printing technology, in parallel with data handling and integrated storage systems, known in healthcare settings as 'picture archiving and communication systems' (PACS), are enabling hospitals and healthcare facilities to rapidly translate imaging data out of the digital domain and into the physical domain (Fig. 1) [5]. To produce a 3D printed model from patient scan data, one must first obtain the scan data in a compatible format, such as a DICOM file, generated as the output viewing format from a variety of medical imaging techniques, such as computed tomography (CT) or magnetic resonance imaging (MRI) (Fig. 1, SCAN) [6]. Next, the scan data must be digitally segmented, which involves isolating and extracting the relevant anatomy from the rest of the scan data and background. This can be done by manually selecting the regions of interest on successive images, or through the use of automated algorithms or artificial intelligence (AI)-driven tools that can extrapolate between multiple slices with a high degree of accuracy [7,8]. Frequently, segmentation is performed using a combination of automated and manual tools (semi-automatic). Once the relevant anatomy has been isolated, it can be processed and converted into a format that can be used by a 3D printer, typically an STL or OBJ file (Fig. 1, MODEL). Finally, the 3D printer can be used to fabricate the physical model using a variety of materials, most typically plastics fabricated via stereolithography (SLA), fused filament fabrication (FFF) or binder jetting (BJ), due to their low cost and accessibility in standard lab settings (Fig. 1, PRINT) [9].
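To make the scan-to-print workflow concrete, the sketch below strings the SCAN, MODEL and PRINT steps together with common open-source tools (pydicom, scikit-image, numpy-stl). The file paths, threshold value and slice handling are illustrative placeholders, and this is not one of the FDA-cleared workflows discussed below; diagnostic use requires cleared software.

```python
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure
from stl import mesh

# SCAN: load a CT series (one DICOM file per slice) into a 3D volume.
slices = [pydicom.dcmread(p) for p in sorted(Path("ct_series/").glob("*.dcm"))]
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# MODEL: simple threshold segmentation (e.g., bone-like intensities),
# then surface extraction with marching cubes.
threshold = 300  # placeholder; real segmentation is usually semi-automatic
verts, faces, _, _ = measure.marching_cubes(volume, level=threshold)

# PRINT: export an STL file for the printer's slicing software.
model = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    model.vectors[i] = verts[face]
model.save("anatomy.stl")
```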
3D printed models of regions of patient anatomy have many, often interchangeable names, such as "surgical planning models", "anatomic models", "medical models" or, common to regulatory information, "physical replicas of 3D models", referring to the physical production of models from digital 3D models generated using 3D modelling software [3,[10][11][12]. In this article, "3D printed anatomical models" has been adopted as a general and universally inclusive term for these models, regardless of application or intended use.
Due to their use in healthcare, with the opportunity to inform patient diagnosis, management, or treatment as a diagnostic tool, these 3D printed anatomical models are of interest to regulatory bodies such as the US Food and Drug Administration (FDA). Whilst 3D printed anatomical models prepared at the point-of-care are not currently considered medical devices themselves, the FDA has required that any 3D printed anatomical models marketed for diagnostic use, meaning those advertised for sale for the purposes of being used by a healthcare professional to diagnose a condition, should be prepared using software that has received FDA clearance [11]. Therefore, only a limited number of software platforms exist that have suitable clearance for the generation of anatomical models that can be produced in combination with validated 3D printers. Whilst the intended use of the software to produce physical replicas for diagnostic use is contained within a software's 510(k) clearance documentation, there is no consolidated reporting mechanism for the specific combination of 3D printers and materials that have been validated using that software, and details are sparsely reported by individual software or 3D printer manufacturers. Further, this list of cleared printers and materials in combination with the segmentation software is often developed for specific clinical indications and/or anatomic regions. This information is vital to healthcare professionals seeking to adopt 3D printing into surgical planning workflows and expand the accessibility of 3D printed anatomical models to improve patient care.
Fig. 1 Overview of the process to design and fabricate 3D printed anatomical models, including acquisition of patient scan data in the form of DICOM, segmentation of the anatomy of interest, 3D modelling of the anatomy and CAD, 3D printing of a physical part and post-processing to clean, cure or remove support structures as necessary. Validation between specific outputs during the workflow is used to confirm the accuracy of specific processes
The current absence of a consolidated list containing information on cleared software and validated 3D printer combinations impairs accessibility and understanding of the landscape of 3D printing workflows suitable for clinical use. Therefore, the aim of this review article is to comprehensively survey software platforms that have been cleared by the FDA for the production of 3D printed anatomical models, alongside the range of 3D printers that have been validated for use to produce 3D printed anatomical models for diagnostic use. Additionally, this review aims to examine the suitability of current verification and validation methodology for the generation of such models, as well as to explore the potential for expanding the range of 3D printers that are validated for use with approved software.
US Software regulation for radiological software
Like many software platforms used in healthcare, 3D modelling software used to translate patient scan data into 3D models suitable for 3D printing is regulated by the FDA if it is intended to be used for diagnostic or therapeutic purposes [13]. Such software is similar in functionality to generic radiographic software: both types of software are used to create visual representations of medical data that can be used for diagnostic or therapeutic purposes, and as such, they have the potential to significantly impact patient health and treatment. In terms of risk profile, radiographic software, as well as software with 3D printing-specific outputs, is generally classified as a moderate-risk (Class II) medical device within the 'LLZ' classification product code, depending on its intended use and the potential for harm if it does not function correctly. The clearance process typically involves submitting a premarket notification, also known as a 510(k), to the FDA, which includes data demonstrating the safety and effectiveness of the software compared to an existing product on the market, known as a 'predicate'. The FDA reviews this data and determines whether the software meets the necessary standards and can be cleared for sale. Alternatively, if a product has new features for which there is no predicate device already on the market, other application pathways may be required, such as de novo applications. The requirement for new software platforms to be subjected to some form of regulatory oversight is important because the use of 3D printed anatomical models produced from digital 3D models generated using these software platforms can have significant consequences for patient health and treatment if used for diagnosis or surgical decision-making, and it is important to ensure that they are produced reliably and accurately.
FDA-Cleared software for producing 3D printed models
Currently, there are seven software platforms on the market that have FDA clearance for producing 3D printed anatomical models. Table 1 summarizes these software platforms, with reference to the FDA clearance documentation provided in the Reference column. Each of these software platforms includes the generation of 3D printed anatomical models within its 'intended use' in combination with specific 3D printer brands, listed in column 3. 3D printed anatomical models produced using five of the software platforms have been cleared for diagnostic use "in conjunction with other diagnostic tools and expert clinical judgement" [14] for a range of clinical applications, namely orthopaedics (also referred to as musculoskeletal), craniomaxillofacial (incl. craniofacial and maxillofacial), and cardiovascular areas. However, AVIEW Modeler (Coreline Software Company) and Simpleware ScanIP (Synopsys) may only be used for "visualization and educational purposes" and do not currently possess clearance for diagnostic use. This means the models cannot be used by a healthcare professional to diagnose a patient's condition based on the 3D printed model; however, they may still be used for other activities within a healthcare setting, such as surgical training and patient education [15,16].
Materialise products (Mimics, Mimics InPrint and Mimics Medical) have played a critical role in establishing a benchmark for the safety and efficacy of these software platforms, with all other software platforms using a Materialise product as either a predicate or reference device for comparison of their safety and performance, and assessment of substantial equivalence (Fig. 2). Their 3D visualisation technology is underpinned by their 3D image viewing and surgical planning software platform developed in the 1990s for dental surgery applications. SIMPLANT remains in routine clinical use for dental surgery planning and surgical guide design after being acquired by a US dental equipment manufacturer, Dentsply Sirona [27].
The selection of validated 3D printers has largely been established through partnerships between software and 3D printing hardware manufacturers [21,25], leading to a bespoke list of 3D printers being available for use in a validated and 'on-label' context. This list of 3D printers introduced in Table 1 has been expanded and reorganized in Table 2 to further explore trends in the growing selection, fabrication modalities and material availability. Formlabs and Stratasys are the most widely validated 3D printer brands, with their vat polymerization (VP) and material jetting (MJ) technology being marketed and applied widely for their capacity to produce accurate, flexible, multicoloured, or multi-component anatomical models [28][29][30]. Whilst the mean cost of one of the printers on the list is just under $100,000 USD ($98,612.50 USD, n = 16), several low-cost 3D printers are available, including the Ultimaker S5 fused filament fabrication (FFF) system for use with PLA within the category of material extrusion (MEX), which, importantly, does not require any of the peripheral post-processing materials necessary for VP fabrication [30]. However, variation in the surface quality and material finish of each technique may render some techniques more suitable than others, in addition to the accessibility of the price point. Intuitively, as 3D Systems is the only company to appear on both the list of software manufacturers and the list of 3D printer manufacturers, they have exclusively validated their D2P software with several of their own 3D printers [19]. Several printers on the list, including the Formlabs Fuse 1, HP 580 and 540, and 3D Systems ProX SLS 6100, are capable of fabricating parts from nylon (PA11 or PA12), which is commonly used as a biocompatible material for tissue-interfacing applications such as surgical guides [31]; however, the regulatory complexities of producing such surgical tools extend beyond the scope of the aforementioned indications for use for anatomical models.
In addition to the seven software platforms mentioned in Table 1, there are other programs that have similar capabilities for converting patient scan data into digital 3D models that can be used for 3D printing. However, these software platforms do not specifically describe the physical fabrication of models as an intended use of the software in their FDA clearance documentation (Table S1). These platforms include Advantage Workstation (AW) (GE Healthcare), which has been validated with Formlabs FORM 3B and 3BL printers [36], and Vitrea Advanced Visualization (Canon), validated with the Stratasys Objet260 Connex3. IntelliSpace Portal 10 (Philips) and Synapse 3D (FUJIFILM) both market their software with 3D printing output capability [37,38], whilst Dolphin 3D Surgery (Patterson Dental Supply), iNtuition (TeraRecon), Osirix MD (Pixmeo Sarl) and Syngo.via (Siemens) have demonstrated use for producing 3D printed anatomical models in the academic literature [39-43] (Table S1).
It is also necessary to distinguish between 3D printed anatomical models produced by a manufacturer for sale in the US and those produced in-house by a hospital or other healthcare provider that are not marketed and sold. FDA regulation currently extends only to products produced for marketing and sale in the US; therefore, whilst it is best practice for hospitals producing 3D printed anatomical models to follow the FDA guidance requiring 3D printed anatomical models to be produced using cleared software [15][32,33], it is not presently a requirement. This nuanced guidance from the FDA is likely to undergo significant change over the coming years as the role of the medical device manufacturer is clarified in the context of the growing trend and return towards point-of-care manufacturing [44]. Thought leaders in the 3D printing for medical applications space strongly advocate for the use of approved software coupled with validated 3D printers in the interests of maintaining "very high standards" and minimizing risk to patient safety [45].
Inaccuracies in model design & fabrication
Reproducible dimensional accuracy is crucial for quality control of 3D printed anatomical models, particularly since they may be used to inform diagnosis and surgical decision-making that may impact patient safety and quality of care. Since these models are not considered medical devices, no harmonized quality control standards currently exist. Research teams and 3D printing facilities around the world have therefore developed and reported a variety of quality management methods, focusing on establishing reproducible dimensional accuracy of 3D printed parts. Dimensional accuracy is defined as the agreement between the measured and designed dimensions of the 3D-printed part [46], and has vital clinical relevance for the quantitative use of these 3D models for characterising pathologies, such as tumours, aneurysms or other pathologies where dimensional fidelity strongly determines treatment pathway and prognosis. Therefore, each stage of the 3D printed anatomical model generation workflow (Fig. 1) requires careful analysis to determine the presence of controlled or uncontrolled sources of inaccuracy and therefore the motivation for regulatory oversight.
Firstly, the image quality generated from CT and MRI scanning modalities is largely well characterised; however, imaging quality and parameters such as the choice of reconstruction kernel or slice reconstruction interval (SRI) have been shown to impact the mean absolute error between original models and 3D printed models [47]. Next, the digital process steps have the potential to introduce inaccuracy in the model design and interpretation of anatomical structures, particularly when performed by non-experts [48,49]. Figure 3 demonstrates the sources of estimation and inaccuracy between the original CT scan data of a femur and the segmentation selection, 'part' and exported STL file. Whilst little difference is perceivable in the macroscopic views of the 3D models, at high magnification the interpretation of the segmented pixel selection into a part and STL file yields a potential source of inaccuracy between the patient anatomy and the produced model (Fig. 3). Several CAD tools are commonly used to prepare the part for final production, including 'wrap' tools to close small holes in the 3D model, or mesh reduction to reduce the number and improve the quality of the triangles comprising the STL model. These tools, in combination with the vast range of adaptation and manipulation tools available in CAD software such as 3-matic (Materialise), may impact the quality and accuracy of the 3D model compared to the patient anatomy and original scan data. This is consistent with previous reports demonstrating that different segmentation and part generation algorithms produce models with statistically significant variation in physical dimensions [50,51]. This further reinforces the accepted standard of practice for point-of-care 3D printing facilities to use software platforms cleared by the FDA in combination with validated 3D printers, since critical inaccuracies could stem from several aspects of the workflow when using non-cleared and non-validated products, particularly when operated by non-radiologists, such as 3D printing technicians without formal medical training.
Fig. 3 Comparison of 3D model morphology of a femur at high magnification. CT scan data (greyscale) was segmented (red) in Mimics (Materialise), converted to a 'part' (green) and exported to an STL file (blue) after 'wrapping' and floating body removal
Finally, the dimensional accuracy of the final 3D printed models may be evaluated using a range of technologies, including callipers, photographic measurements, surface scanning, photogrammetry, coordinate measuring machines (CMMs), or CT scans, summarised in Fig. 4 [42]. Many studies evaluating accuracy focus on a single pathology or region of anatomy [9,42,47], and it has been highlighted that further research is needed to evaluate the accuracy of anatomic models across a more generalised range of anatomical regions [46].
Validation & quality control methods
Whilst formalised quality control systems for 3D printing anatomical models in hospitals have not yet been mandated by the FDA, several methodologies have been proposed in the academic literature, ranging from versatile guidance for routine manufacturing workflows through to systematic academic studies reporting vital fundamental validation where the true anatomical accuracy has been directly measured from cadaveric samples [42,52]. Since the true patient anatomy is rarely accessible during routine clinical cases, the DICOM scan data is widely accepted as the ground truth, to which the STL file and 3D printed part are compared (Fig. 1). Comparison of the DICOM file to the STL file provides validation information on the accuracy of the segmentation and CAD processes, validating the software tools used to generate the digital 3D model. This validation is included in the validation and verification testing performed by the FDA-cleared software platforms listed in Table 1 and validates the suitability of these platforms to accurately translate the 3D scan data into 3D models. At this stage, radiologist oversight is recommended to ensure the quality of the digital model [57]. Next, the STL file is 3D printed to generate the physical model, the accuracy of which compared to the STL file is intrinsic to the 3D printer itself, the material, the paired slicing software, the printing mechanism, and upkeep and maintenance, and may not be specific to the design being printed. This should be independently and routinely validated using standardized models and manufacturer-specific guidance [58]. Full process validation is therefore critical, ensuring that the final printed product is within an acceptable tolerance of the original DICOM data (Fig. 1).
Fig. 4 Summary of accuracy measurement techniques for validating the fabrication of 3D printed anatomical models. Linear measurements of anatomical features may be taken from a 3D scan of the 3D printed model or the physical model itself (blue) [42,46,52,53], whilst optical or laser surface scanning allows 2D surface comparisons between anatomical features in the original scan data, STL file and physical model (green) [9,42,54,55]. Finally, a 'residual volume' metric is proposed for 3D quantification of model accuracy (pink) [56]
Since the DICOM file (sliced 2D images), STL file (3D digital model) and final printed part (3D physical model) exist in different spatial as well as physical or digital domains, several metrics for comparison have been utilized: 1D linear measurements, 2D surface measurements, and 3D volumetric measurements (Fig. 4). Measurements on the final 3D printed part may be performed directly, in the case of linear measurements using callipers, or via re-visualization of the part using 3D surface scanning, such as optical, photogrammetry or laser scanning, or CT scanning, offering a continuum of spatial information at a variety of resolutions depending on the specific equipment used [59].
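One common way to compare a re-scanned printed part against its source STL is a point-cloud distance metric. The sketch below, our illustration rather than a method from the cited studies, computes a symmetric Hausdorff distance between two surface point clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_mm(points_a, points_b):
    """Symmetric Hausdorff distance between two surface point clouds,
    e.g. vertices of the source STL versus a surface scan of the print."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d_ab = cKDTree(b).query(a)[0].max()  # farthest a-point from surface b
    d_ba = cKDTree(a).query(b)[0].max()  # farthest b-point from surface a
    return max(d_ab, d_ba)

# Toy check: a sparse cloud shifted by 0.3 mm gives a ~0.3 mm distance.
rng = np.random.default_rng(0)
cloud = rng.uniform(0, 50, size=(2000, 3))
print(hausdorff_mm(cloud, cloud + [0.3, 0.0, 0.0]))
```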
Industry leaders have widely supported the use of callipers to perform linear measurements directly on 3D printed outputs, compared against digital linear measurements performed on the DICOM and STL files, for routine quality control [46,60]. These measurements are routinely performed on macroscopic dimensions of large components or on wall thicknesses of hollow or tubular structures. They may be compared to the STL file or the original DICOM dataset, as shown in Fig. 1, with a tolerance of <1 mm deviation between the physical model and the original data widely considered acceptable in the literature for diagnostic models [9,46,61]. However, such measurements on specific anatomical features of personalized models cannot be readily compared between cases. Therefore, the inclusion of standardized 'landing blocks' of a specific dimension added into the 3D model has been proposed by Ravi et al. (2022) to enable reproducible and comparable measurements between models of varying geometry and clinical application [46]; a simple tolerance check of this kind is sketched below. The tolerance requirements are much stricter for devices such as anatomic guides that have to fit onto the target bony anatomy than for anatomic models used for diagnostic purposes. Other, more comprehensive techniques, such as surface and volume measurements based on scans of the physical part, play a vital role in process establishment, enabling comparison of digital scan data of the printed product against the segmentation and STL data. These techniques enable characterisation of the accuracy of all features of the part, notably thin internal features that may be inaccessible for physical measurement. However, their role in routine quality control may be limited due to cost and time inefficiency compared with physical measurements using callipers [56].
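The following sketch applies the <1 mm tolerance cited above to a set of calliper measurements; the function name and the measurement values are hypothetical.

```python
import numpy as np

def tolerance_report(designed_mm, measured_mm, tolerance_mm=1.0):
    """Compare calliper measurements on the printed part against the
    corresponding dimensions in the STL/DICOM data and flag deviations
    beyond the commonly cited 1 mm tolerance for diagnostic models."""
    designed = np.asarray(designed_mm, dtype=float)
    measured = np.asarray(measured_mm, dtype=float)
    deviation = measured - designed
    print(f"mean absolute deviation: {np.abs(deviation).mean():.3f} mm")
    for d, m, dev in zip(designed, measured, deviation):
        flag = "OK" if abs(dev) < tolerance_mm else "OUT OF TOLERANCE"
        print(f"designed {d:7.2f} mm  measured {m:7.2f} mm  dev {dev:+.2f} mm  {flag}")

# Hypothetical measurements on landing blocks / wall thicknesses.
tolerance_report([10.0, 25.0, 3.5], [10.2, 24.6, 3.1])
```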
3D Printed anatomical models driving hospital-based manufacturing
As technology and the technological competency of healthcare providers for producing 3D printed anatomical models continue to advance, it is likely that FDA guidance will evolve to reflect these changes. The FDA may consider several dynamic factors when updating its guidance in the coming years, including the development of new applications and validation techniques, feedback from key stakeholders such as surgeons, 3D printing experts and patient groups, as well as changes in the international regulatory landscape. This is particularly pertinent given the proximity of the technologies underpinning 3D printed anatomical model manufacturing to those capable of producing other personalised medical devices and equipment that fall under medical device manufacturing regulation.
The growing demand for personalised medical devices such as surgical implants has strongly driven the requirement for point-of-care manufacturing, both to minimize lead times for manufacturing personalized devices and to address cybersecurity concerns by reducing data-sharing with third parties outside the healthcare provider's systems during the design and manufacture of personalized devices. These new challenges, intrinsic to the technological capability offered by 3D printing for producing personalized devices, are stimulating a growing conversation within regulatory bodies about reconsidering how healthcare providers can also act as medical device manufacturers.
Availability of 3D printers
Beyond regulatory considerations, the availability of 3D printers that have been validated for use in conjunction with cleared 3D modelling software remains limited, as demonstrated in Tables 1 and 2. Only a small subset of the available types of 3D printing techniques is represented in the list of validated printers, and only an even smaller cohort of the thousands of brands and models of 3D printers on the market capable of producing 3D printed anatomical models has been validated and marketed for this use. Strategic partnerships between software providers and 3D printer manufacturers have motivated the validation of specific printers with software platforms [16,17,62]; however, with the validation testing methods used by these providers and manufacturers absent from the public domain, the list of available printers may remain limited. The prevalence of expensive (>$100,000 USD) printing equipment, with disproportionately few low-cost options relative to the range available on the market, is a limiting factor for the acceleration of 3D printing facility establishment in hospitals, despite low-cost models having similar clinical relevance to those produced on high-cost equipment [63][64][65].
Reimbursement & economics
Finally, a parallel challenge to accelerating the adoption of 3D printed anatomical models, in addition to regulatory and technological considerations, is the economic proposition. This has recently been the topic of an excellent editorial by Prof Frank Rybicki (University of Cincinnati), who examines the intersection of regulation and reimbursement in the current landscape of hospital-based manufacturing [57]. In July 2019, the American Medical Association (AMA) defined four new Current Procedural Terminology (CPT®) codes relating to 3D printed anatomical models and surgical tools. CPT® codes are a "uniform language for coding medical services and procedures to streamline reporting" [66], and the inclusion of specific codes relating to 3D printed models and guides presents an exciting step forward towards routine adoption and use of 3D printed models in healthcare settings. Specifically relating to 3D printed anatomical models, "codes 0559T and 0560T represent reimbursement for the production of individually prepared 3D printed models that can be made from one or more components and unique colors and materials" and can be used to bill for the production of these products during patient care [66]. However, the codes are currently 'temporary' Category III codes; therefore, health insurers are not obliged to reimburse for these codes, nor is a specific value assigned to the code for reimbursement. It is therefore at the discretion of individual health insurers whether they choose to reimburse for 3D printed anatomical models and, if so, for how much. A survey of over 300 US health insurers' reimbursement schedules [67] suggests that only 15 health insurers currently choose to reimburse for these specific CPT® codes, to an average value of $91.78 USD per model (n = 15) [68][69][70]. The Veterans Health Administration reimburses the highest amount of the surveyed insurers, to a maximum of $372.78 USD [71]. Coupled with their nationally leading network of on-site 3D printing facilities [72], including as a compliant medical device manufacturer [73], this provides an insightful estimate of the feasible cost of routinely produced 3D printed anatomical models, based on the ability of the VHA to produce and bill for these models in-house. However, the comprehensive costs associated with producing anatomic models may be substantially higher, as demonstrated in a recent study where the average cost of producing anatomic models across 11 clinical indications was $2,180 at the point-of-care and $2,467 when outsourced to industry [74].
Ultimately, further research, validation testing methods and regulatory oversight will accelerate the availability of validated and cleared workflows for producing personalized surgical planning models at the point of care, propelling 3D printed anatomical models into routine clinical use. This article has sought to provide a consolidated summary of FDA-cleared software platforms specifically suited to the generation of 3D printed anatomical models, as well as the 3D printers currently validated for use with the FDA-cleared software. The sources of inaccuracy contributing to the risk profile of using non-cleared software and hardware combinations are also discussed, finally summarizing the currently accepted techniques for validating the entire scan-to-print pathway, alongside specific aspects of the manufacturing process to produce 3D printed anatomical models. This resource therefore seeks to enable further adoption of safe and effective point-of-care 3D printing for surgical planning models and to expand their application towards routine adoption in healthcare settings globally.
Fig. 2 Timeline of 510(k) clearance for medical imaging software for producing 3D printed anatomical models. The company name, software name and 510(k) number are provided on a timeline, as well as arrows indicating a software application's references to other software as a predicate or reference device in its 510(k) application
Table 1 List of 3D modelling software platforms with the intended use of producing 3D printed anatomical models, cleared by the FDA (Class II 510(k) pathway)
Ultrasensitive mechanical detection of magnetic moment using a commercial disk drive write head
Sensitive detection of weak magnetic moments is an essential capability in many areas of nanoscale science and technology, including nanomagnetism, quantum readout of spins and nanoscale magnetic resonance imaging. Here we show that the write head of a commercial hard drive may enable significant advances in nanoscale spin detection. By approaching a sharp diamond tip to within 5 nm of a write pole and measuring the induced diamagnetic moment with a nanomechanical force transducer, we demonstrate a spin sensitivity of 0.032 μB Hz⁻¹/², equivalent to 21 proton magnetic moments. The high sensitivity is enabled in part by the pole's strong magnetic gradient of up to 28 × 10⁶ T m⁻¹ and in part by the absence of non-contact friction due to the extremely flat writer surface. In addition, we demonstrate quantitative imaging of the pole field with ~10 nm spatial resolution. We foresee diverse applications for write heads in experimental condensed matter physics, especially in spintronics, ultrafast spin manipulation and mesoscopic physics.
Magnetic recording heads generate intense local magnetic field pulses to write bits of information onto a magnetic medium. Bits are encoded in the magnetization direction of magnetic domains separated by <20 nm in state-of-the-art devices. Write fields thus must be confined to a very narrow region in space, leading to extremely high local gradients. Although these gradients cannot be precisely measured, they are estimated to exceed 20 × 10⁶ T m⁻¹ (ref. 1). This is far beyond the capability of static magnetic tips (~5 × 10⁶ T m⁻¹ (refs 2,3)) and that of nanoscale coils or microstrips (~10⁵ T m⁻¹ (refs 3,4)), and likely close to the highest experimentally achievable gradient for this kind of field source. In addition, write poles are rapidly switchable, potentially allowing for dynamical control of magnetic fields up to 1 GHz (refs 5,6) at low power consumption. With these features, the hard drive industry has created a tool that could enable important advances in many areas of nanoscale experimental physics.
A significant challenge in characterizing and exploiting writer fields is their nanometre spatial confinement. In this study, we introduce a variant of force microscopy, magnetic susceptibility force microscopy (χFM), to localize and measure the write pole field quantitatively and with high spatial resolution. The technique relies on the small diamagnetic moment induced in a nominally non-magnetic diamond tip (ref. 7) that is positioned over the pole. We show that by using a state-of-the-art nanomechanical force transducer (refs 8,9), tip magnetic moments far below 1 μB can be detected, thus greatly advancing the sensitivity of mechanical spin detection (refs 10-12). Moreover, because of the diamond tip's well-defined material composition and extremely sharp end radius (<10 nm), quantitative and high-resolution field maps of the pole can be reconstructed. Our method thus provides advantages over magnetic force microscopy (MFM) (refs 13-17) and electron holography (ref. 18), which are difficult to quantify, barely reach sufficient resolution, or provide only two-dimensional projections.
Results
Experimental set-up and measurement technique. The experimental geometry and the basic protocol for magnetic susceptibility force measurements are presented in Figs 1 and 2, respectively. The magnetic force is generated by passing an alternating current I through the write head's (WH) drive coil. The current dynamically magnetizes the write pole and a stray field B appears above the pole. A tip placed in the stray field acquires a small magnetization M = χB/μ₀ owing to its magnetic susceptibility χ (μ₀ = 4π × 10⁻⁷ N A⁻² is the permeability of free space, and typically |χ| < 10⁻⁴). A weak force develops that attracts or repels the tip from the region of strongest field, depending on whether the tip is paramagnetic (χ > 0) or diamagnetic (χ < 0). For a point-like particle located at position r, the force is

F(r) = ∇[μ(r)·B(r)]/2,  (1)

where μ = VM = VχB/μ₀ is the magnetic moment and V the volume of the particle. For a transducer responsive to forces along the x direction, the measured force signal is given by F_x = μ·(∂B/∂x). Since both the susceptibility χ and the volume V ≈ 10 nm³ are small, the expected magnetic moment is minute, on the order of 1 μB for a field of a few hundred mT. Even in a high gradient >10⁶ T m⁻¹, the diamagnetic force will therefore be <10⁻¹⁶ N. To distinguish this force signal from background fluctuations and spurious electrostatic driving, we modulate the drive current at half the cantilever resonance frequency (f_c/2) while measuring the force generated at f_c (Fig. 2). For small driving currents I, the write pole is not saturated and the field and gradient are both proportional to I. For a sinusoidal modulation I(t) = I₀ sin(πf_c t) with amplitude I₀, the resulting force F_x ∝ B₀² sin²(πf_c t) = B₀²[1 − cos(2πf_c t)]/2 has frequency components at d.c. and f_c, where B₀ is the amplitude of the stray field. The force then drives mechanical oscillations x(t) of the cantilever that are detected by optical interferometry (see Methods). Note that as the current is increased, the write pole eventually becomes saturated, resulting in a square-wave response to a sinusoidal drive. This leads to the appearance of higher-order terms in the Fourier series.
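As a quick numerical check of the force scale implied by the numbers above, the sketch below evaluates F_x = μ·(∂B/∂x) for an assumed moment of 1 μB in a 10⁶ T m⁻¹ gradient; the values are illustrative, not measurements from the paper.

```python
import math

muB  = 9.274e-24   # Bohr magneton (J/T)
grad = 1e6         # assumed field gradient (T/m)

# F_x = mu * dB/dx for a point moment of ~1 muB.
force = muB * grad
print(f"F ~ {force:.1e} N")  # ~9e-18 N, indeed well below 1e-16 N
```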
Scanned measurements. In a first experiment, we demonstrate that magnetic susceptibility forces can indeed be measured. For this purpose, we positioned the diamond tip at z = 20 nm over the pole surface and measured the cantilever signal as a function of drive current amplitude I, as shown in Fig. 3a. As expected from equation (1), the cantilever signal increased quadratically when increasing the driving current from zero to a few mA. To confirm that the observed signal is truly due to magnetic driving by the write pole, we also varied the shape of the current modulation. When using a rectangular drive, no signal was observed, consistent with the absence of a Fourier component for this modulation pattern (Fig. 3b). As the pattern was continuously changed from rectangular to triangular, the cantilever signal first increased, reached a maximum for a trapezoidal drive, and then decreased again (see figure insets).
To record a two-dimensional image, we performed a raster scan over the pole and plotted the cantilever oscillation amplitude as a function of the xy tip position (Fig. 4a). The scan clearly shows two regions of high signal, which we identify as the front and back ends of the write pole. As expected, the largest force is generated near the trailing gap, where the bending of field lines is highest. A finite element simulation of the expected force signal (Fig. 4b), based exclusively on the micrograph of Fig. 1d, agrees surprisingly well with the scanning experiment (see Supplementary Fig. 1 for details). The signal peaks are localized to within ~10 nm, demonstrating the high spatial resolution of the magnetic susceptibility force imaging technique.
By measuring the phase shift between the cantilever oscillation and the drive current, information about the sign of χ can be extracted in addition to the magnitude of the magnetic force. Figure 4c,d shows two-dimensional force images obtained by scanning two types of tips. Tip A was coated with a 10 nm Pt layer and was moderately paramagnetic (χ_Pt = +2.9 × 10⁻⁴) (ref. 19). Tip B was a bare diamond nanowire and weakly diamagnetic (χ_diamond = −2.2 × 10⁻⁵) (ref. 20). The two figures clearly show how the paramagnetic tip is attracted to the centre of the write pole (where the magnetic field is highest), while the diamagnetic tip is repelled from the high-field region. These measurements demonstrate that the technique is sensitive to the sign of χ.
Reconstruction of magnetic field and gradient. Although our measurements record images of the magnetic force F_x, and not the field B, we can rigorously reconstruct the magnetic field and the gradient from a force map. By integrating equation (1) along the x direction, one obtains an expression for the magnetostatic energy of the tip as a function of position,

E(r) = ∫ F_x dx = χV B²(r)/(2μ₀).

This expression can be used to deduce the magnetic field, B(r) = [2μ₀ E(r)/(χV)]^{1/2}.

Maximum achievable field and gradient. To estimate the maximum field and gradient that can possibly be generated at the tip location, we measured the dipole force as a function of tip-surface distance z and as a function of the drive current I. These measurements are also presented in Fig. 5. As the distance between tip and pole surface was reduced, the signal increased until non-contact friction (ref. 7) began to dampen the mechanical oscillation. The signal decayed exponentially with distance, with characteristic length d = 11.3 ± 0.6 nm for tip B (Fig. 5c) and d = 13.5 ± 1.2 nm for tip A (Supplementary Fig. 5). The signal decay is thus not influenced by the tip shape. We found using numerical modelling that d is mainly set by the width of the write pole near the trailing gap, which was about 60 nm in our devices.
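The reconstruction can be carried out numerically on a measured force line scan. The sketch below is our illustration: the susceptibility, tip volume and the toy force profile are placeholders, and only the integration-and-square-root steps follow the relations above.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

mu0 = 4 * np.pi * 1e-7
chi = -2.2e-5            # diamagnetic diamond tip
V   = 1e-24              # m^3, assumed effective tip volume

x  = np.linspace(-100e-9, 100e-9, 201)                   # scan axis (m)
Fx = 1e-16 * (x / 50e-9) * np.exp(-x**2 / (30e-9)**2)    # toy repulsive force (N)

# E(x) = integral of F_x along x; B = sqrt(2*mu0*E/(chi*V)).
E = cumulative_trapezoid(Fx, x, initial=0.0)             # magnetostatic energy (J)
B = np.sqrt(np.clip(2 * mu0 * E / (chi * V), 0, None))   # field magnitude (T)
print(f"peak field ~ {B.max():.2f} T")
```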
To test the maximum drive current, we monitored the signal from tip B while increasing the drive beyond the failure current density (Fig. 5d). The signal continued to increase up to a breakdown current of ~30 mA, showing that the write pole had not reached magnetic saturation. We have modelled the magnetic response based on a Langevin magnetization curve with a saturation field of 2.4 T (FeCo) (ref. 21) and found that the pole magnetization at 30 mA is about 1.57 ± 0.08 T, or roughly 65% of the saturation magnetization (Supplementary Note 3). Note that the power dissipation is small even at the maximum driving amplitude. For a WH resistance of R = 3 Ω that should ideally be achievable, the dissipated power is only 14 mW even when the coil is continuously driven with 30 mA. In our experiments, the total dissipated power was in fact limited by lead and contact resistances, and not by the write element. In pulsed mode and at low duty cycle, we expect that saturation can be reached with <1 mW average dissipation, eventually permitting millikelvin operation. Our estimates of the maximum field and gradient are based on the simulation of Fig. 4b and on the scaling of the force with drive current I and distance z. The experiment demonstrates that magnetic fields around 0.87 T and gradients around 28 MT m⁻¹ are present at 30 mA drive, about 5× larger than the gradients observed with static nanoscale ferromagnets (ref. 2). Even larger gradients are expected under magnetic saturation, which could be reached by pulsing or by an external bias field. The experimental results are consistent with an independent finite element calculation (sharing no common input parameters) that predicts slightly smaller, but overall similar values (Table 1). The large gradient is a result of the close access (~5 nm) enabled by the diamond nanowire tips, and also of the recessed nature of the pole, which prevents oxidation and allows for sharp, highly magnetized pole edges.
Discussion
WH gradient sources could therefore enable important steps forward in the mechanical detection of electronic and nuclear spins. In Fig. 5c, forces up to F = 568 aN are generated with a net magnetic moment of only μ = χVB/μ₀ ≈ 3.3 × 10⁻²³ A m², roughly equivalent to 3.6 μB. This represents a strong force per electron spin of about 160 aN per μB. Since the force sensitivity of the transducer, F_min ≈ 5 aN Hz⁻¹/², is maintained even at 5 nm spacing (Fig. 5e), an excellent magnetic moment sensitivity of 0.032 μB Hz⁻¹/² results, which is equivalent to about 21 proton moments per root Hz. This value improves to ~12 proton moments if the pole is magnetically saturated. By comparison, the best reported sensitivities are ~100 proton moments for previous force detectors (refs 10,22), and ~250 proton moments for scanning SQUID sensors (ref. 23).
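The quoted sensitivity follows directly from the numbers in this paragraph; the short check below reproduces the arithmetic using standard constants.

```python
muB   = 9.274e-24   # Bohr magneton (J/T)
mu_p  = 1.411e-26   # proton magnetic moment (J/T)
F_min = 5e-18       # N/sqrt(Hz), transducer force sensitivity
force_per_muB = 160e-18  # N per muB (568 aN for a 3.6-muB moment)

sens = F_min / force_per_muB  # moment sensitivity in muB/sqrt(Hz)
print(f"{sens:.3f} muB/rtHz = {sens * muB / mu_p:.0f} proton moments/rtHz")
# prints ~0.031 muB/rtHz, i.e. ~21 proton moments per root Hz
```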
The sensitivity is even more remarkable since the performance of the force transducer was not particularly optimized in our study. It could be further enhanced by surface passivation (refs 9,24), a change of material (refs 25-27) or lower operating temperatures (refs 26,28). It is therefore conceivable that WH gradients will pave the way for single nuclear spin detection. This would constitute a milestone advance towards the realization of the long-standing proposal of single-nucleon magnetic resonance imaging (ref. 29). The depth resolution of such imaging would be limited by the decay length of the pole field, which is 2d ≈ 23 nm for the WHs used in our study. This is a significant depth resolution compared with other candidate imaging techniques, like nitrogen-vacancy spin sensors, where single-spin sensitivity does not extend beyond a few nm (refs 30,31).
Aside from the detection of magnetic forces, the use of WHs may allow for significant advances in several fields of active research. Perhaps the most important of these is the manipulation of spin systems in the context of quantum technologies (ref. 32). Static magnetic field gradients can be used for rapid spin manipulation by means of electric fields (refs 33,34) or quantum dot exchange interaction (refs 35,36), and for strong and tunable coupling in hybrid quantum systems (ref. 37). The gradient can further be exploited to set up spin registers in quantum simulators (ref. 38). The combination of large field and rapid switching (refs 5,6), which is difficult to achieve in typical research devices, will allow the implementation of very fast spin manipulation through magnetic resonance techniques (ref. 39). Universal spin control requires two orthogonal axes of rotation. Although the WH only provides one axis, given by the direction of B(r), there are several possible ways to add a second axis, as proposed in Fig. 6. Beyond spin control, WH devices may finally present creative new opportunities in nanoscale transport experiments. For example, pulsed spin-polarized currents could be launched through local magnetization of ferromagnetic electrodes (ref. 43). Electrons in confined geometries, such as quantum point contacts, could be locally deflected and the magnetic potential varied within the ballistic regime. Since WHs have an extremely flat surface made from non-conducting diamond-like carbon, complex lithographic structures including local gates, microwires, constrictions, quantum dots or spin qubits could be conveniently integrated.
Methods
Write heads. The hard disc WHs used in this study were extracted from Seagate Barracuda 1 and 4 TB desktop drives. As a general rule, we worked on anti-static mats and grounded ourselves while working with the WH to avoid electrostatic discharges. Care was taken to avoid physical contact with the trailing edge of the WH where the write/read regions are located.
We opened the hard drive casing to gain access to the platter discs and the actuator arms. The WH blocks are located at the ends of these arms. We severed the ends of the actuator arms using a pair of scissors, then peeled the WH block from its metal support with a pair of sharp tweezers. The backside of a WH block has a small circular dab of glue fixing it to the actuator arm. The glue turned out to be problematic for precise, flat positioning of the WH in our experiment. To remove this dab of dried glue, we flipped the WH block upside down on a clean silicon wafer and scratched the backside of the WH block with a sharp pair of tweezers wrapped in a couple of layers of Kimwipe until the back surface was completely clean and residue-free as determined by optical microscopy. A ~2 nm Pt layer was evaporated on the WH surface to screen electric charges.
For low-temperature scanning probe experiments, we fixed the WH on top of a sample stage made from Shapal, an aluminium nitride ceramic with excellent thermal conductivity (Supplementary Fig. 6). We avoided metal as the stage material to prevent eddy-current damping at radio frequencies. A total of nine metal bonding pads were found on the side of the WH block. Two of the nine pads were connected to the current-carrying coil inside the WH block for controlling the write pole magnetization direction. We identified the two relevant coil leads through trial and error, using magnetic force microscopy for readout verification (Supplementary Fig. 7). For electrical contact to macroscopic wires, we used a silicon jumper chip with a lithographically patterned gold thin-film structure matching the pad dimensions.
The mechanical and electrical reassembly process took place in the following sequence. We first glued the jumper chip to the Shapal stage using an insulating glue compatible with high vacuum and cryogenic temperatures (Epotek H70E). We glued copper wires to the two metal pads on the jumper chip using a conducting epoxy (Epotek H20E) and mechanically fixed them with an additional layer of Epotek H70E. Next, the WH block was carefully positioned such that the two leads for the write pole were aligned with the corresponding leads on the jumper chip, and then fixed using Epotek H70E. Electrical contact between the WH and the jumper chip was then established using Epotek H20E with the aid of a hydraulic micromanipulator system (Narishige three-axis hanging joystick oil hydraulic micromanipulator, model MMO-202ND) operated under an optical microscope. Finally, we used an Asylum Research Cypher MFM to confirm successful electrical control of the write pole magnetization (Supplementary Figs 7 and 8).
Diamond nanowire tips. Single-crystal diamond nanowire tips were fabricated via inductively coupled plasma etching following procedures detailed in a previous publication (ref. 7). The nanowires were transferred from their mother substrate to an intermediate silicon wafer chip via PDMS stamping (Gel-Pak 4 padding material). Two tips were prepared for this study; tip A consisted of a diamond nanowire with an apex diameter of ~40 nm coated by 15 nm of YF₃ (χ = −1.0 × 10⁻⁶ (ref. 44)) and 10 nm of Pt (χ = +2.6 × 10⁻⁴ (ref. 19)). These layers were deposited onto the silicon chip used for tip A via e-beam evaporation. The YF₃ was deposited with future ¹⁹F NMR experiments in mind and had no role in this study, and the Pt was deposited to turn the nanowire paramagnetic as well as to screen electric charges. Tip B was a bare diamond nanowire (χ = −2.2 × 10⁻⁵ (ref. 20)) with an apex diameter of ~18 nm. High-resolution scanning electron micrographs of both tips are shown in Supplementary Figs 2 and 3. The cantilevers had a length of 120 μm, a shaft width of 4 μm and a thickness of 120 nm (ref. 8). The spring constant was k_c = 90 μN m⁻¹, the resonance frequencies were between f_c = 5 and 6 kHz, and the quality factors were around Q = 30,000 at 4 K, resulting in a nominal force sensitivity of about 4 aN Hz⁻¹/². See ref. 7 for fine details of the nanowire handling and transfer processes.
Sample positioning. Due to the extremely smooth, topographically featureless air-bearing surface of the WH, locating the write pole with the scanning cantilever was a challenge. We tested a number of methods and found focused ion beam deposition of platinum dot markers to be the most reliable and efficient one. Dots about 200 nm in diameter and 70 nm in height were written close to the write pole. These dot features were easily identified when performing scans with the cantilever at constant height over the surface, using the cantilever resonance frequency as image contrast (Supplementary Fig. 9).
Force signal acquisition. All measurements discussed in the main text were performed in a cryostat operated at 4 K and in high vacuum (<1 × 10⁻⁶ mbar). The WH driving current at f_c/2 was generated by an arbitrary waveform generator (NI PXIe-5451). TTL pulses were used to synchronize the lock-in amplifier (Stanford Research SR830), to which the cantilever motion signal from the optical interferometer was sent. A negative feedback loop was used to damp the cantilever to an effective operating Q of about 300, to shorten the sensor response time and to keep the driven cantilever oscillation below ~1 nm. The force was calibrated by measuring the thermomechanical noise of the cantilever at 4 K and comparing the oscillation amplitude of the magnetically driven cantilever with the r.m.s. amplitude of the thermal motion. We only calibrated the force for tip B, not for tip A. The phase of the force, which is available as a lock-in output, was calibrated by minimizing the imaginary part of the signal.
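A simplified numerical sketch of this kind of calibration is given below: the thermal r.m.s. amplitude follows from equipartition, and an on-resonance driven amplitude converts to force through the (feedback-damped) quality factor. It is our illustration under stated assumptions, not the authors' analysis code.

```python
import math

kB = 1.380649e-23   # Boltzmann constant (J/K)
T  = 4.0            # bath temperature (K)
k  = 90e-6          # spring constant (N/m), from the Methods
Q  = 300            # effective, feedback-damped quality factor

# Equipartition: k <x^2> = kB T  ->  thermal r.m.s. amplitude.
x_th = math.sqrt(kB * T / k)

# On resonance, x = Q F / k  ->  force from a driven amplitude (assumed 1 nm).
x_drive = 1e-9
F = k * x_drive / Q
print(f"x_thermal = {x_th * 1e9:.2f} nm rms, F = {F * 1e18:.0f} aN")
```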
Finite element modelling. We carried out finite element simulations with COMSOL to validate the experimental results. The geometry of the write pole and the surrounding return shield were extracted from Fig. 1d (Supplementary Fig. 1a,b). The return shield thickness was estimated to be ~200 nm based on a focused-ion-beam cut into the WH surface. We assumed FeCo (with fixed magnetization M_pole) as the material for the write pole and NiFe (μ_r = 2 × 10³) for the return shield 1,45. No parameters were adapted to fit the experimental results. The simulation uses a write pole magnetization of μ₀M_pole = 1 T, and the numerical values reported in Table 1 were obtained by scaling to the correct pole magnetization. Supplementary Fig. 1c shows a profile cut through the write pole and return shield at y = 0.
Data availability. The data that support the findings of this study are available from the corresponding author upon request.
Some Insight into the Generalized Linear Least Squares Parameter Adjustment Methodology
Some features of the generalized linear least squares parameter adjustment procedure are discussed and proved, in particular: the equivalence of the adjusted measured response values and the values recalculated with the adjusted parameters; the proper way to iterate when the responses are not a linear function of the parameters; and the equivalence of the simultaneous adjustment of the parameters with two uncorrelated measured responses and the consecutive adjustment, first with one response and then with the second.
Introduction
The generalized linear least squares adjustment methodology has been used for almost fifty years in nuclear data evaluation and testing, in reactor dosimetry, as well as in criticality safety analysis [1][2][3][4]. Many new users of the methodology either are unaware of some of its properties or try to reinvent them. In this paper we elaborate on and document some of the less known properties of the methodology.
The Adjustment Formulas
The outcome of a generalized linear least squares adjustment can be presented by the following equations, in which $p$ denotes a column vector of $N$ parameters to be adjusted by a series of $I$ measured responses presented by the column vector $r$. Any quantity with a prime symbol indicates the respective adjusted, i.e., posterior, quantity. The corresponding calculated response values using the parameters are given by the column vector $r(p)$. Since the calculated response values usually differ from their respective measured values, we define their difference vector by $d = r(p) - r$. The partial derivatives of the calculated responses with respect to the various parameters, the sensitivities, are presented by the $I \times N$ dimensional matrix $S$, and its transpose, the $N \times I$ sensitivity matrix, is denoted by a dagger, $S^\dagger$. The linearity is expressed by $r(p') - r(p) = S (p' - p)$. The respective uncertainties are presented by the square variance-covariance matrices $C_p$, $C_r$ and by the $I \times N$ response-parameter covariance matrix $C_{rp}$. The uncertainty in the difference vector, stemming from the respective uncertainties in the parameters and in the measured responses, is given by $C_d = C_r + S C_p S^\dagger - S C_{pr} - C_{rp} S^\dagger$. Usually the measured responses are not correlated with the parameters; however, even if there is no prior response-parameter correlation, the application of the generalized linear least squares adjustment results in such correlations. The posterior uncertainties in the responses, in the parameters and the resulting response-parameter correlations are given by Eqs. (4)-(7). Equations (1)-(7) were derived by minimizing the quadratic loss function $(r' - r, p' - p)^\dagger C^{-1} (r' - r, p' - p)$, where the covariance matrix $C$ is $\begin{pmatrix} C_r & C_{rp} \\ C_{pr} & C_p \end{pmatrix}$, subject to the linearity constraint, Eq. (3), using Lagrange multipliers, in [5,6], and in a more elegant way in [7]. This method has important advantages over the alternative method of using the linearity, Eq. (3), explicitly in the loss function and minimizing it in a straightforward way. The two methods give identical results.
For the sake of simplicity, let us rewrite Eq. (1) for the case of vanishing response-parameter correlations. The parameter adjustment is $p' - p = -C_p S^\dagger C_d^{-1} d$, while the alternative derivation [5], without using Lagrange multipliers, results in $p' - p = -(C_p^{-1} + S^\dagger C_r^{-1} S)^{-1} S^\dagger C_r^{-1} d$; the two forms are seen to be identical by multiplying both sides of the equation on the left by $C_p^{-1} + S^\dagger C_r^{-1} S$ and on the right by $C_d$, which is $C_r + S C_p S^\dagger$. The straightforward minimization of the loss function necessitates the inversion of two large matrices, whereas Eqs. (1)-(2) and (5)-(7) require only the inversion of $C_d$, whose dimension is the number of responses, usually much smaller than the number of parameters.
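To make the matrix bookkeeping concrete, here is a minimal numerical sketch (ours, with toy numbers) of the uncorrelated-case formulas above, including the posterior covariances discussed in later sections:

```python
import numpy as np

rng = np.random.default_rng(0)
N, I = 6, 2                                # parameters, responses

S   = rng.normal(size=(I, N))              # sensitivities, I x N
C_p = np.diag(rng.uniform(0.5, 1.0, N))    # prior parameter covariance
C_r = np.diag(rng.uniform(0.1, 0.2, I))    # measured-response covariance
d   = rng.normal(size=I)                   # d = r(p) - r

C_d = C_r + S @ C_p @ S.T                  # uncertainty of the difference vector
w   = np.linalg.solve(C_d, d)

dp = -C_p @ S.T @ w                        # p' - p
dr =  C_r @ w                              # r' - r

# Posterior covariances: prior minus the covariance of the adjustment itself
C_p_post = C_p - C_p @ S.T @ np.linalg.solve(C_d, S @ C_p)
C_r_post = C_r - C_r @ np.linalg.solve(C_d, C_r)

# Consistency of the next section: r(p') - r' = d + S dp - dr vanishes
assert np.allclose(d + S @ dp - dr, 0.0)
# The posterior variances are reduced (see "Reduced Uncertainties")
assert np.all(np.diag(C_p_post) <= np.diag(C_p) + 1e-12)
```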
Adjusted Responses and Re-calculated Parameters and Responses
The calculated response vector using the adjusted parameters and the linearity property is $r(p') = r(p) + S(p' - p) = r(p) - S C_p S^\dagger C_d^{-1} d = r + d - (C_d - C_r) C_d^{-1} d = r + C_r C_d^{-1} d = r'$. This means that, if the linearity assumption is correct, the adjusted response vector and the corresponding response vector recalculated with the adjusted parameters are just the same, thus $r(p') = r'$. Although there is an uncertainty in $r(p')$, due to the propagated uncertainties in the adjusted parameters $p'$, and there is an uncertainty in the adjusted response $r'$, there is no uncertainty in their difference, which is always equal to zero. However, if the calculated response is not a linear function of the parameters, then the real recalculated value, using the adjusted parameters, is not equal to the adjusted response $r'$.
Adjustment Iterations
In practice, since the linearity is only an approximation, the response explicitly recalculated with the adjusted parameters may not coincide with the adjusted response value, and thus the new difference vector $d' = r(p') - r'$ does not vanish.
It is tempting to try to re-adjust (adjust again) the new parameters $p'$ and the new responses $r'$ using their new respective uncertainties $C_{p'}$ and $C_{r'}$. According to the adjustment formulas, the re-adjusted response vector $r''$ is given by $r'' - r' = (C_{r'} - C_{r'p'} S^\dagger) C_{d'}^{-1} d'$, which means that if we do not recalculate the sensitivities at $p'$, which is time consuming, and instead use in the re-adjustment the original sensitivities, calculated at $p$, the re-adjusted response values just do not change although $d'$ does not vanish, since $C_{r'} - C_{r'p'} S^\dagger = 0$. Similarly, since $C_{p'r'} - C_{p'} S^\dagger = 0$, the re-adjusted parameters also do not change, and since these two coefficients are equal to zero, the adjusted covariance matrices also do not change. The sensitivities therefore have to be recalculated after each iteration.
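The need to relinearize can be sketched as a simple iteration (ours; finite-difference sensitivities, relinearizing about the latest estimate while keeping the prior fixed, in the spirit of an iterated Gauss-Newton update; the schemes in the cited references may differ in detail):

```python
import numpy as np

def glls_iterate(r_model, p0, C_p, r_meas, C_r, n_iter=5, eps=1e-6):
    """Iterated GLLS: recompute the sensitivities at each new estimate."""
    p = p0.copy()
    for _ in range(n_iter):
        # Finite-difference sensitivities S_ij = dr_i/dp_j at the current p
        r0 = r_model(p)
        S = np.column_stack([
            (r_model(p + eps * np.eye(len(p))[:, j]) - r0) / eps
            for j in range(len(p))
        ])
        # Linearized deviation referred back to the prior point p0
        d = r0 - r_meas + S @ (p0 - p)
        C_d = C_r + S @ C_p @ S.T
        p = p0 - C_p @ S.T @ np.linalg.solve(C_d, d)
    return p

# Toy nonlinear model: two responses of three parameters
r_model = lambda p: np.array([p[0] * np.exp(p[1]), p[1] ** 2 + p[2]])
p0 = np.array([1.0, 0.5, 0.2])
p_adj = glls_iterate(r_model, p0, 0.1 * np.eye(3),
                     r_meas=np.array([1.8, 0.5]), C_r=0.01 * np.eye(2))
```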
Reduced Uncertainties
The common notion is that the need to adjust parameters, e.g., neutron cross sections, arises when there is a discrepancy between the calculated response values and their corresponding measured values, e.g., dosimeter reaction rates. However, it should be remembered that even when there is full agreement between calculated and measured response values, i.e., $d = 0$, and the parameters and response values do not change, the prior uncertainties in the parameters as well as in the measured response values are reduced. This reduction reflects the fact that there is no prior discrepancy. Even if there is a prior discrepancy, one important result of an adjustment campaign is that the posterior response values and the posterior parameters have a finite, reduced uncertainty. The posterior response uncertainty, given by the covariance matrix $C_{r'} = C_r - C_r C_d^{-1} C_r$, is the difference between two covariance matrices: the first matrix is the prior covariance matrix of the measured responses and the second matrix is the covariance matrix of the response adjustment, $r' - r$. Since all covariance matrices are positive definite, the uncertainty is reduced. Similarly, the posterior uncertainty in the parameters, $C_{p'} = C_p - C_p S^\dagger C_d^{-1} S C_p$, is also the difference of two covariance matrices, the second being the covariance matrix of the parameter adjustment, $p' - p$.
Consecutive Adjustments
A frequently asked question, particularly in the reactor physics community and in the cross section evaluators community, is whether an adjusted parameter library can be further modified by an adjustment with a new measured response, or whether one has to go back to the original parameter library and perform a new adjustment campaign with all responses. In this section we examine this question, first through the simultaneous adjustment with two uncorrelated measured responses and then through the consecutive adjustment of the parameter library by the two responses.
The $2 \times N$ sensitivity matrix $S$ has two rows: the first row, $S_1$, contains the sensitivities of the first response to all the parameters, and the second row, $S_2$, those of the second response. All notations will carry an index denoting their relationship to response number one or response number two.
Simultaneous Adjustment
The joint uncertainty matrix of the deviations of the two calculated response values from their corresponding uncorrelated measured response values is given by $C_d = \begin{pmatrix} C_{r_1} + S_1 C_p S_1^\dagger & S_1 C_p S_2^\dagger \\ S_2 C_p S_1^\dagger & C_{r_2} + S_2 C_p S_2^\dagger \end{pmatrix}$ and its inverse is given by $C_d^{-1} = \frac{1}{\det C_d} \begin{pmatrix} C_{r_2} + S_2 C_p S_2^\dagger & -S_1 C_p S_2^\dagger \\ -S_2 C_p S_1^\dagger & C_{r_1} + S_1 C_p S_1^\dagger \end{pmatrix}$. The simultaneous adjustment in the response values is given by $r' - r = C_r C_d^{-1} d$. In particular, the adjustment of the first response is $r_1' - r_1 = C_{r_1} [(C_{r_2} + S_2 C_p S_2^\dagger) d_1 - S_1 C_p S_2^\dagger d_2] / \det C_d$ and the adjustment of the second response is $r_2' - r_2 = C_{r_2} [(C_{r_1} + S_1 C_p S_1^\dagger) d_2 - S_2 C_p S_1^\dagger d_1] / \det C_d$. The simultaneous adjustment in the parameter values is given by $p' - p = -C_p S^\dagger C_d^{-1} d$. All adjusted quantities, the parameters as well as the two previously uncorrelated measured responses, now carry information from each of the adjusted responses.
Adjustment by the First Response
The adjustment of the parameters by the first response alone, $p_1' - p = -C_p S_1^\dagger d_1 / (C_{r_1} + S_1 C_p S_1^\dagger)$, results in the adjustment of the first response by $r_1' - r_1 = C_{r_1} d_1 / (C_{r_1} + S_1 C_p S_1^\dagger)$. The symbol $p_1'$ indicates that these are adjusted parameters (the prime indicating adjustment) and that the adjustment was performed with the first response (subscript 1). The uncertainty in these parameters, which has to be used in the further adjustment by response number two, is $C_{p_1'} = C_p - C_p S_1^\dagger S_1 C_p / (C_{r_1} + S_1 C_p S_1^\dagger)$. All adjusted items are the result of the application of the adjustment formulas given in Sect. 2.
Consecutive Adjustment by the Second Response
We proceed with the adjustment of the parameters with response number two. Response number two is now calculated not with the original parameters $p$ but rather with the parameters that have already been adjusted by response number one. The resulting deviation of its calculated value from its measured value is $d_2' = r_2(p_1') - r_2$, and the uncertainty of this deviation, needed for the next adjustment, is $C_{d_2'} = C_{r_2} + S_2 C_{p_1'} S_2^\dagger = \det C_d / (C_{r_1} + S_1 C_p S_1^\dagger)$. Note that the deviation of the calculated response from its measured value used in the second adjustment is not $d_2$, since the response was calculated with the adjusted parameters $p_1'$ and not with the original parameters $p$. Using the linearity to calculate the shift of the response value resulting from the shift of the parameters due to the first adjustment, this deviation can be reformatted as $d_2' = d_2 + S_2 (p_1' - p) = d_2 - S_2 C_p S_1^\dagger d_1 / (C_{r_1} + S_1 C_p S_1^\dagger)$. The adjustment in response two by response two, calculated with the previously adjusted parameters, is then $r_2' - r_2 = C_{r_2} d_2' / C_{d_2'} = C_{r_2} [(C_{r_1} + S_1 C_p S_1^\dagger) d_2 - S_2 C_p S_1^\dagger d_1] / \det C_d$, where the denominator can be recognized as $\det C_d$. Thus, the total adjustment of response two is the same as its simultaneous adjustment with response one.
The Adjusted Parameters
The original parameters have been adjusted twice: first by response one, and then the adjusted parameters have been adjusted once more by response two. Let us denote these parameters as $p_{1,2}''$, the double prime indicating two adjustments and the subscript 1,2 indicating that the adjustments were performed first by response one and then by response two.
The total parameter adjustment is the sum of the first adjustment by response one, starting with the original parameters $p$ and presented in Sect. 3.2, followed by the second adjustment by response two starting with $p_1'$, derived in Sect. 3.3. The second adjustment is $p_{1,2}'' - p_1' = -C_{p_1'} S_2^\dagger d_2' / C_{d_2'}$. Plugging in the various items that have been derived in earlier sections and manipulating the expression in a similar way, we end up with a compact expression, $p_{1,2}'' - p = -C_p S^\dagger C_d^{-1} d$. The consecutively adjusted parameters are thus equal to the adjusted parameters resulting from a simultaneous adjustment of the parameters with the two measured responses, as derived in Sect. 3.1.
The Measured Response one After the Second Adjustment
Response one has been adjusted by itself and is equal to its calculated value with the corresponding adjusted parameters. The following adjustment of these parameters, by the second response, necessitates the calculation of the value of response one once again, this time with the twice-adjusted parameters. The final calculated value of response one is the sum of its former adjusted value and, using the linearity, the shift of the calculated response due to the shift of the parameters from $p_1'$ to $p_{1,2}''$: $r_1(p_{1,2}'') - r_1 = [r_1(p_{1,2}'') - r_1(p_1')] + [r_1(p_1') - r_1]$. After inserting the parameter shift into the last equation and manipulating the terms once again, we get $r_1(p_{1,2}'') - r_1 = C_{r_1} [(C_{r_2} + S_2 C_p S_2^\dagger) d_1 - S_1 C_p S_2^\dagger d_2] / \det C_d$, which is equal to the result of the simultaneous adjustment.
We have shown that the simultaneous adjustment of a set of parameters by two uncorrelated responses is equivalent to adjusting the parameters by one of the responses and then modifying these adjusted parameters by a second adjustment with the second measured response.
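A quick numerical check of this equivalence, using exactly the formulas above (ours; scalar uncorrelated responses):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
S   = rng.normal(size=(2, N))             # rows S_1, S_2
C_p = np.diag(rng.uniform(0.5, 1.0, N))
c_r = np.array([0.2, 0.3])                # uncorrelated response variances
d   = rng.normal(size=2)                  # d_1, d_2

def adjust(S, C_p, C_r, d):
    """One GLLS step; returns (parameter shift, posterior C_p)."""
    S, C_r, d = np.atleast_2d(S), np.atleast_2d(C_r), np.atleast_1d(d)
    C_d = C_r + S @ C_p @ S.T
    K = C_p @ S.T @ np.linalg.inv(C_d)
    return -K @ d, C_p - K @ S @ C_p

# Simultaneous adjustment with both responses
dp_sim, _ = adjust(S, C_p, np.diag(c_r), d)

# Consecutive: first response 1, then response 2 with the updated prior
dp1, C_p1 = adjust(S[0], C_p, c_r[0], d[0])
d2_new = d[1] + S[1] @ dp1                # deviation recomputed at p'_1
dp2, _ = adjust(S[1], C_p1, c_r[1], d2_new)

assert np.allclose(dp_sim, dp1 + dp2)     # consecutive == simultaneous
```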
Summary
After briefly introducing the motivation for writing this paper and the symbols and equations of the generalized linear least squares adjustment, we pointed out a few less known and less documented properties of the methodology. We showed that the equality of the adjusted responses and their corresponding values calculated with the adjusted parameters is a direct consequence of the linearity assumption in the linear least squares adjustment methodology. We have also shown that unless the sensitivities are recalculated at the adjusted parameters, repeating the adjustment leaves the adjusted quantities unchanged, and that consecutive adjustments with uncorrelated responses reproduce the simultaneous adjustment.
THE CZECH REPUBLIC SUGAR MARKET DEVELOPMENT IN THE CONTEXT OF THE PHASING OUT OF SUGAR QUOTA
The aim of the paper is to assess the current position and situation of the Czech sugar market actors. A new situation is expected due to the declared phasing out of sugar quotas within the EU Common Agricultural Policy. The analysis is based on secondary data from the statistics of the Czech Statistical Office and the Ministry of Agriculture of the Czech Republic. The Czech Republic has a long and strong tradition in sugar production, although it lost its leading position in the sugar market in the past. The sugar industry has been affected by various factors over several periods. The last big challenge for the market was the restrictive system of the Common Agricultural Policy of the European Union. The expected development of the sugar market in the no-quota environment could be a good opportunity for both Czech sugar beet producers and sugar beet processors (sugar producers).
INTRODUCTION
Quota limits on sugar production, defined at EU Member State level and further allocated to processing factories and individual sugar beet growers, have been in place since the 1960s.
The European agricultural market has been criticized for its heavy regulation and subsidization, and the sugar market is one of the most regulated ones (Benešová, Řezbová, Smutka, & Laputková, 2015). One result of this policy was that the EU had been both the second largest importer and the second largest exporter in the world market (Moyo, & Spreen, 2011). On the other hand, the impact of the EU Common Agricultural Policy (CAP) on food prices has gradually weakened (Swinnen, Knops, & van Herck, 2015). This has several reasons: besides other factors of world agriculture and world market development, the CAP itself changed its priorities and concepts. The most important change seems to be in the character of direct payments, namely their separation from production ("decoupling"), so that such payments became conditional on the fulfillment of many standards relating to environmental protection, animal welfare, food safety and food quality (Bečvářová, 2011).
The Czech sugar industry was influenced by a dramatic transition process, which was accompanied by many changes, for example: many sugar factories reduced their production or closed down completely; there were changes of ownership and an increased role of foreign (often unpredictable) investors, modifications of sugar distribution networks, and technological innovations (Krejčí, Havlíček, Klusáček, and Martinát, 2014).
In 2006, a reform of the Common Agricultural Policy (CAP) sugar regime brought a simplification of the quota structure, and incentives were offered to Member States that opted to reduce, or renounce altogether, their national quota limits. Quotas were prolonged until 2014/15, with no commitment to further renewal. In the end, they were prolonged until the end of 2016/2017, to be definitively abolished by 30 September 2017. The basic aim of the sugar reform, which was submitted to the EU Commission and approved by the Council of Ministers in November 2005, was to reduce the price and production of sugar, mainly at the expense of the least competitive growers and sugar areas, to increase competitiveness, and to make the European market more accessible for developing countries (Krouský, 2008).
In general terms, the EU CAP sugar regime led to an oligopolistic market situation in the European Union. Current EU sugar production is concentrated in five countries (France, Germany, Poland, Great Britain and the Netherlands). Some countries discontinued sugar production (Portugal, Ireland, Latvia, Bulgaria and Slovenia), while other countries (including the Czech Republic) reduced their production (Baudisová, 2017). The majority of sugar quotas are controlled by companies headquartered in Germany, France, the Netherlands and the United Kingdom. Nowadays, the sugar quota system in the European Union is operated/controlled by only a few very powerful operators: Südzucker, Nordzucker, Tereos, ABF, Pfeifer & Langen, Royal Cosun and Cristal Union (Řezbová, Maitah, & Sergienko, 2015). Sugar in the EU has been one of the most heavily subsidized sectors (Gotor, & Tsigas, 2011).
The basic change in the sugar market from 1 October 2017 is the end of production quotas, which for the Czech Republic amounted to 372,459.207 t of sugar, divided among five sugar companies. 1 October marks the beginning of the period of unregulated sugar production in the EU, as well as of unregulated export of sugar to third countries. Sugar factories will no longer pay the production levy on sugar (EUR 12/t), and the fixed minimum sugar beet price (EUR 26.29/t), by means of which the grower has so far been "protected", ceases to apply.
The over-quota sugar regime also ends. This system set out how to deal with "excess" sugar. With regard to the export of sugar to third countries, quantitative restrictions on exports of over-quota sugar cease to apply. Other international agreements will not change and will continue to be in force.
The system of market regulation through production quotas ensured a balance of supply and demand, so that there were no strong price fluctuations. For many reasons, a decrease in sugar prices should be expected (Burrell, Himics, Van Doorslaer, Ciaian, & Shrestha, 2014).
The reasons for the decline in sugar prices are, however, more numerous. In addition to the quota abolition, there are excellent sugar beet yields in the zone of Central Europe and a high increase in some areas of the growing countries. The price is, of course, also influenced by the world price of sugar (Smutka, Rumánková, Pulkrábek, & Benešová, 2013). Further development of the EU sugar market depends also on the behaviour of sugar producers from outside the European Union, who have made imports thanks to preferential access to the EU market (Meyer et al., 2016).
Another question is the link between sugar beet growing and the topical problems of sustainable development. Sustainability concerns have a fundamental economic aspect regarding competitiveness with cane sugar (Řezbová, Maitah, & Sergienko, 2015) and an environmental aspect including, above all, the current issue of emissions and foreign chemical substances (Chochola, Pulkrábek, 2012).
The aim of the paper is to assess the current position and situation of the Czech sugar market actors.
METHODS
Secondary data were collected from the statistics of the Czech Statistical Office and the Ministry of Agriculture of the Czech Republic. Data were processed by means of Microsoft Excel.
In order to project the development trend, a polynomial function was employed, as follows: $y = a_n x^n + a_{n-1} x^{n-1} + \dots + a_2 x^2 + a_1 x + a_0$, where $a_n, a_{n-1}, \dots, a_2, a_1, a_0 \in \mathbb{R}$, $a_n \neq 0$, are real coefficients: $a_n$ is the coefficient of degree $n$ (for the highest power), $a_k$ the coefficient of degree $k$, $a_1 x$ the linear term, and $a_0$ the absolute term.
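For reference, the same trend fitting can be reproduced outside Excel; the sketch below is ours and uses illustrative numbers, not the paper's series:

```python
import numpy as np

# Illustrative series: processed sugar beet per campaign (not the real data)
years = np.arange(2008, 2017)
beet  = np.array([2.7, 2.6, 2.9, 3.6, 3.4, 3.5, 3.8, 3.7, 4.0])  # mln t

# Fit a degree-2 polynomial y = a2*x^2 + a1*x + a0
coeffs = np.polyfit(years, beet, deg=2)
fitted = np.polyval(coeffs, years)

# Coefficient of determination R^2, as reported next to the Excel trendline
ss_res = np.sum((beet - fitted) ** 2)
ss_tot = np.sum((beet - beet.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```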
RESULTS
The total volume of processed sugar beet has increased in the long term. After a decrease of several years in the early period of applying the Common Market Organization (CMO) in the Czech Republic (since 2000), sugar beet production and processing recovered and later even surpassed their previous quantities. The biggest jump in quantity came in 2011/2012, when the year-on-year increase was 656,790 tons of processed sugar beet, i.e., 22.5%. Not only sugar beet production but also sugar production achieved the best results in the previous history of the Czech Republic (Froněk, Trnková, & Hanák, 2012). The overall increasing trend since 2008 was confirmed by the polynomial regression (see Fig. 1); the value of the regression coefficient (coefficient of determination) is high and the test was significant. The quantity of processed sugar beet corresponds with the statistics on the yield (quantity of harvested tubers, Tab. 1) and the harvested area (Tab. 2). Production of sugar constantly surpasses the needs of the country: the Czech Republic is self-sufficient in this commodity and exports sugar to other countries (as implied by Tabs. 1, 2 and 3, covering the campaigns 2004/2005 to 2010/2011; Tab. 3 reports the country's consumption).
Table 1. Yield of tubers (1,000 t), campaigns 2004/2005 to 2010/2011. Source of data: Czech Statistical Office (https://vdb.czso.cz/vdbvo2/faces/cs/index.jsf?page=statistiky&katalog=31779; https://www.czso.cz/csu/czso/setreni-prumernych-cen-vybranych-vyrobku-potravinarske-vyrobky-casove-rady).
Thanks to the fact that countries such as Hungary, Slovakia, Romania or Ukraine will very probably remain in sugar production deficit even in the future, Czech producers can look forward to a very interesting market in the region of Central and Eastern Europe with up to 70 million potential customers. The undoubted advantage of the Czech Republic is also its greater distance from the sea, which protects it from imports of white sugar as well as from imports of raw sugar for further refining; the latter imports, indeed, are likely to be economically disadvantageous due to the overall proximity of raw and white sugar prices (Reinbergr, 2017).
DISCUSSION
The growth (or regrowth) of the harvested area in the Czech Republic since 2009, hand in hand with the improving yield, promises good potential for Czech producers to export sugar, given that production surpasses domestic consumption. A question remains as to how the market will evolve after the EU quota abolition. Looking for a parallel, we can consider the situation after the recent milk quota abolition, even though the particular factors are not exactly the same (Kovářová and Procházková, 2017). There was a slight price reduction, as expected by OECD and FAO (2014). However, the effect of the no-quota environment was not very visible in the context of the influence of other factors of the dairy market (Salou et al., 2017).
Very few studies (compared to the studies preceding the phasing out of dairy quotas) have been devoted to scenarios of sugar market development after the phasing out of sugar quotas. The OECD-FAO Agricultural Outlook 2014 predicted the possible development for both situations: quotas continuing and quotas removed. In the case of sugar quota abolition, sugar beet production is expected to rise, as is the production of isoglucose. The processing of sugar beet for sugar is expected to increase at the expense of ethanol production. The price could slightly decrease, but the world price would remain strongly volatile. A lower price of sugar could lead producers using sweeteners to switch from other sweeteners to sugar. A lower price of sugar could also lead to a decrease in sugar imports to the EU, as imports would no longer be cost-effective for cane sugar producers.
CONCLUSION
In the no-quota environment, sugar beet producers get a good opportunity to raise their production. Sugar beet processors will have to consider their focus on sugar and/or ethanol production. Sugar production still seems to be more profitable than ethanol, so with no limits on sugar production and a potential to market it, the choice of sugar production can be expected; however, Czech sugar beet processors have recently invested in ethanol production technology. Particularly in periods of lower sugar prices, there is good potential to export sugar to those European countries that are not self-sufficient in sugar production.
Effectiveness of Nateglinide on In Vitro Insulin Secretion from Rat Pancreatic Islets Desensitized to Sulfonylureas
Chronic exposure of pancreatic islets to sulfonylureas (SUs) is known to impair the ability of islets to respond to subsequent acute stimulation by SUs or glucose. Nateglinide (NAT) is a novel insulinotropic agent whose primary site of action is the β-cell K_ATP channel, a target common to structurally diverse drugs such as repaglinide (REP) and the SUs. Earlier studies on the kinetics, glucose-dependence and sensitivity to metabolic inhibitors of the interaction between NAT and K_ATP channels suggested distinct signaling pathways for NAT compared with REP, glyburide (GLY) or glimepiride (GLI). To obtain further evidence for this concept, the present study compared in vitro insulin secretion from rat islets stimulated acutely by NAT, GLY, GLI or REP at equipotent concentrations during a 1-hr static incubation following overnight treatment with GLY or tolbutamide (TOL). The islets fully retained their responsiveness to NAT stimulation after prolonged pretreatment with both SUs, while their acute response to REP, GLY, and GLI was markedly attenuated, confirming the desensitization of the islets. The insulinotropic efficacy of NAT in islets desensitized to SUs may result from a distinct receptor/effector mechanism, which contributes to the unique pharmacological profile of NAT.
INTRODUCTION
Sulfonylureas (SUs) are widely used in the treatment of non-insulin-dependent (Type 2) diabetes. They stimulate insulin secretion by closing β-cell plasma membrane K_ATP channels [1], which are formed by the molecular interaction between a high-affinity SU receptor, SUR1, and an inwardly rectifying K⁺ channel subunit, Kir6.2 [2]. The closure of K_ATP channels leads to the opening of voltage-dependent Ca²⁺ channels, Ca²⁺ influx and stimulation of Ca²⁺-dependent exocytosis [3]. While SUs exert their hypoglycemic action via direct stimulation of insulin release during short-term administration in type 2 diabetic patients, their activity declines during long-term application, which has been suggested to be directly attributable to a desensitization of β-cells to SUs [4][5][6][7]. In vitro, chronic exposure of pancreatic islets to SUs is also known to impair the secretory response to subsequent stimulation by glucose or SUs [8][9][10][11]. Nateglinide (NAT), a novel D-phenylalanine derivative, shares with the SUs a mechanism of action on K_ATP channels in pancreatic β-cells. Compared to other marketed K_ATP channel-blocking hypoglycemic agents, this drug has demonstrated unique characteristics including a rapid onset, short duration of action, sensitivity to ambient glucose, and resistance to metabolic inhibition, suggesting that some aspects of the signaling pathway(s) mediating NAT's action are novel and distinct from those mediating the effects of SUs and REP. This study was designed to evaluate the ability of NAT and the comparators glyburide (GLY), glimepiride (GLI), and repaglinide (REP) to stimulate insulin secretion from isolated rat islets after prolonged pre-exposure to the SUs GLY or tolbutamide (TOL). Using a static incubation method, normal rat islets were incubated overnight (~18 hours) in the absence and presence of therapeutically relevant concentrations of GLY or TOL before their acute responsiveness to NAT, GLY, REP, and GLI was determined. Our results showed that lasting treatment with GLY or TOL markedly attenuated the islets' acute response to GLY, GLI and REP, while NAT retained the ability to stimulate insulin secretion from SU-desensitized islets. These findings are suggestive of common signaling pathways for the action of SUs and REP. On the other hand, the effectiveness of NAT in SU-desensitized islets could imply the involvement of an action site and/or a receptor/effector pathway different from those of SUs and REP. The unique "desensitization-resistant" properties of NAT may explain some of the pharmacological differences between NAT and the SUs and REP [12,13].
MATERIALS AND METHODS
Islet Isolation
Pancreata were dissected from normal fed male Sprague Dawley rats (250-275 g), which were euthanized with Na pentobarbital i.p. at 120 mg/kg. Islets of Langerhans were isolated by Liberase digestion (0.5 mg/ml, Boehringer Mannheim, Germany) followed by Ficoll gradient centrifugation [14].
Static Incubation of Islets Desensitized to SUs
Freshly isolated islets were handpicked under a stereomicroscope by gentle suction through a large fire-polished pipette (~400 μm diameter) into 60 × 15 mm Petri dishes (Corning) containing 25 ml of DMEM (Dulbecco's Modified Eagle Medium, Gibco BRL) supplemented with 10% fetal calf serum, 5 mM glucose (G5-DMEM) and 1% BSA (BSA was present in all incubation media throughout the experiments). Islets were preincubated in a humidified atmosphere of 95% O₂ and 5% CO₂ at 37 °C for 1 hour.
After the 1-hour preincubation, the islets were selected in batches of two and placed into borosilicate glass tubes (12 × 75 mm) containing 500 μl of G8-DMEM in the presence or absence of 100 nM or 10 μM GLY or 100 μM TOL, and incubated overnight (~18 hours) in a humidified atmosphere of 95% O₂/5% CO₂ at 37 °C. Acute experiments were performed the following morning to determine the ability of the islets to respond to NAT and other insulinotropic agents. 500 μl of 2× concentrations of the test hypoglycemic drugs was added to the tubes (to form a 1 ml final volume) containing either GLY/TOL-treated or GLY/TOL-untreated islets. The concentrations of the test hypoglycemic drugs were chosen to be comparably effective in controls, i.e., 5 μM NAT, 100 nM GLY, 100 nM GLI and 50 nM REP. All test drugs were first dissolved in DMSO to form a stock solution of 10.
To allow comparison of the hypoglycemic druginduced insulin secretion from overnight SUtreated and untreated islets, data were expressed as percent of the appropriate con-trois. That is to say, insulin secretion stimulat-ed by secretagogues in SU-treated and untreated groups was normalized, respectively, with the amount of insulin without secreta-gogues in SU-treated and untreated groups. This transformation corrected for the increase of control values elicited by overnight treat-ment of SUs. All values were expressed as the mean _SEM. Statistical significance was determined with t-test (single-tailed). P < 0.05 was considered significantly different.
RESULTS
Drug-induced Insulin Secretion from Islets with Overnight Treatment of 100 nM GLY
It was assumed that prolonged treatment with SUs would impair islet responsiveness to physiological and pharmacological stimuli. The effectiveness of the four hypoglycemic agents, NAT, GLY, GLI, and REP, in stimulating insulin release from islets cultured overnight (18 hours) in the presence of 100 nM GLY was evaluated and compared with that from overnight-cultured islets without GLY, to determine the magnitude of desensitization induced by GLY. In the acute experiments following overnight incubation, the concentrations of the test drugs were chosen to be comparably effective in stimulating insulin release at a glucose concentration of 8 mM, i.e., NAT at 5 μM, GLY at 100 nM, GLI at 100 nM, and REP at 50 nM. These hypoglycemic agents stimulated insulin secretion during the 1-hour static incubation from islets cultured overnight without 100 nM GLY by a respective 270 ± 44%, 292 ± 42%, 301 ± 26% and 430 ± 35%. In parallel experiments on islets cultured overnight with 100 nM GLY, the stimulation factors were, respectively, 354 ± 19%, 302 ± 13%, 340 ± 12%, and 352 ± 7%. These data are illustrated in Figure 2, in which the stimulation factors of all test agents were not statistically different between islet groups without and with pretreatment with 100 nM GLY. Thus, prolonged treatment with glyburide at 100 nM did not appear to render the islets insensitive to subsequent application of insulin secretagogues.
Drug-induced Insulin Secretion from Islets with Overnight Treatment of 10 μM GLY
When the concentration of GLY for overnight treatment was raised 100-fold to 10 μM, the pattern of the acute response of the islets to insulin secretagogues changed; the results are shown in Figure 3. The ability of NAT to stimulate insulin secretion did not significantly differ between the GLY-untreated control (126 ± 20%) and the GLY-pretreated groups (155 ± 14%). Conversely, the acute stimulation factors by GLY, REP and GLI differed between the groups without and with pretreatment.
Figure 4. Insulin release during 1-hour static incubation in response to acute challenge by antidiabetic agents from rat islets following overnight treatment without (empty bars) and with (filled bars) 100 μM TOL at G8. Data were pooled from 16 independent samples/condition and expressed as % of appropriate controls (± TOL). *p < 0.05 and **p < 0.005 when compared between data with and without pretreatment.
Basal insulin secretion during static incubation without and with TOL pretreatment was, respectively, 210 ± 22 and 314 ± 16 μU/islet/1 h. The results of treatment of islets with or without 100 μM TOL are illustrated in Figure 4. NAT was able to stimulate insulin secretion by 240 ± 23% in control and 160 ± 35% after TOL pretreatment; this change was not statistically significant. On the other hand, the stimulation factors in control or TOL-pretreated islet groups were, respectively, 257 ± 27% or 134 ± 23% with GLY, 298 ± 27% or 137 ± 13% with REP, and 260 ± 38% or 155 ± 20% with GLI. The results reinforced the argument that islets desensitized to SUs retain responsiveness to NAT but not to GLY, REP and GLI. The changes of insulin secretion in both the GLY- and TOL-pretreated groups are summarized in Table I.
DISCUSSION
The present study demonstrated that prolonged exposure of pancreatic islets to the SUs GLY or TOL impaired their ability to secrete insulin in response to subsequent secretagogue stimulation, a phenomenon known as islet desensitization. Similar results have been previously reported for both drugs [8][9][10][16]. As K_ATP channel blockade plays a key role in the mechanism of action of SUs and of all test drugs in this study, one possible mechanism for SU desensitization is that chronic exposure to SUs results in lasting binding of SU to SUR1 and closing of K_ATP channels, and in turn a lasting membrane depolarization. The constant depolarization would lead the β-cells to a refractory state, in which they respond less effectively to further stimulation by K_ATP channel-closing secretagogues. Evidence for this mechanism of SU desensitization associated with chronic exposure to GLY has been previously reported [11]. If this were indeed the case, SU desensitization is likely to be a temporary and reversible condition of refractoriness of the β-cells to secretagogue stimulation, since K_ATP channel activity would eventually recover upon withdrawal of SUs.
The key finding of the study was that NAT was the only one of the four test agents that maintained its insulinotropic efficacy in islets desensitized to SUs. Given that islets were still able to respond vigorously to NAT after 18 h of pretreatment with SUs, exhaustion of a finite insulin store and/or of the islet secretory machinery did not appear to be the underlying defect in the desensitized secretion to GLY, REP and GLI. (Table I note: all data in SU-pretreated and non-pretreated groups are expressed, respectively, as percent of insulin secretion in the appropriate control groups without acute stimulation by secretagogues; *p < 0.05 and **p < 0.005 for data with SU treatment compared to those without.) The insulinotropic action of REP, GLY and GLI was desensitized by lasting SU exposure, suggestive of a common site of action for these drugs. On the other hand, the inability of both TOL and GLY pretreatment to affect NAT-induced insulin secretion suggested a distinct mode of action of NAT. Provided that the mechanism of desensitization was indeed associated with the refractoriness of SUR1/K_ATP channels in β-cells, a possible explanation for the effectiveness of NAT vs. the ineffectiveness of GLY, REP and GLI in stimulating insulin secretion following prolonged pretreatment with SUs would be the existence of distinct binding sites on SUR1 for NAT vs. SUs and REP. It is possible that the site on SUR1 for SUs becomes hypo-responsive to GLY, GLI and REP due to desensitization induced by chronic treatment with GLY or TOL, while the site on SUR1 for NAT remains fully available for activation with or without pretreatment with SUs. Alternatively, the present data could be reconciled by proposing a second mechanism of the insulinotropic action of NAT that is independent of K_ATP channels.
Being cautious not to over-interpret the in vitro findings of the present study, we only speculate that some aspects of the signaling pathways mediating NAT's effect are markedly dissimilar from those involved in the effects of SUs and REP. Although the present study did not address the time-dependent secretory pattern but rather the cumulative stimulation of insulin secretion from islets during a certain period of time, earlier work by others and by us revealed essential differences of NAT action from those of GLY, GLI and REP with respect to (1) the kinetics of the interaction of NAT with the SUR1 receptor/K_ATP channels [17]; (2) the kinetics of the in vitro/in vivo insulinotropic action [18][19][20][21][22]; (3) the glucose-dependence [23,24]; and (4) the sensitivity to metabolic inhibitors [25]. These findings collectively indicate the uniqueness of NAT action and possibly a distinct receptor/effector system(s). By virtue of these properties, NAT is able to ameliorate postprandial hyperglycemia by augmenting early insulin release, while SUs and REP may preferentially decrease postabsorptive hyperglycemia due to their slower onset and sustained duration of action.
In conclusion, the maintenance of the insulinotropic efficacy of NAT in islets desensitized to SUs adds to the list of properties of this agent that distinguish it from other SUR ligands, despite the presumed common basic mechanism of action of these agents.
Formalization of target and assessment of quality of the cyber-physical control systems for emissions reduction technologies
Monitoring and inventory are a necessary but not sufficient condition for reducing emissions in the energy sector and industry. The ultimate task of controlling air purification technologies, in combination with monitoring, is achieved effectively when the human factor is minimized. The creation and application of cyber-physical technology control systems provide a solution to a set of related environmental problems. In addition, the development of models based on cyber-physical systems forms new requirements for the development of technologies for their creation. A special feature of such technologies is the close interaction of computing methods with design and production facilities. These issues have not been sufficiently reflected in publications. The article presents the target setting for the creation of cyber-physical technology control systems based on local conservation laws and the decomposition of the target into macroscopic quality assessments. A mathematical model and methods of quality assessment at the life cycle stages of such systems, based on the recalculation of probability densities of a priori data, are presented. Examples of cyber-physical emission reduction systems in the energy sector and industry developed under the nature-technology concept are given.
Introduction
Problems and tasks in the air quality problem area include the monitoring and management of harmful emissions. Relevant emission monitoring and inventory programs and projects have been established both at the global and regional level [1] and for specific energy and industrial facilities. Well-known publications are usually limited to references to the application of cyber-physical monitoring results in subsequent technologies [2,3]. Monitoring is necessary, but the use of cyber-physical systems (CPS) also involves feedback control tasks. Correct and rational penetration into the physical environment using the methods and tools of cybernetics must be accompanied by an assessment of the reactions of the physical environment during its control. In the environmental domain, such ultimate objectives are minimization, neutralization, recycling and other problem- and object-oriented air quality assurance technologies. In this article, it is proposed that the target of air quality assurance be presented in local forms on the basis of conservation laws. Air quality assurance technologies are proposed to be implemented at the life cycle stages of the CPS, with control of the achievement of the required values of quality indicators based on a two-level model of the specification characteristics and the parameters ensuring them. The article proposes to evaluate the quality of systems based on the main characteristics of the requirements specification at the stages of the life cycle. During the design, construction and technological preparation stages of production, all kinds of deviations beyond the limits of established norms and tolerances occur, and these have a random character. The article suggests an approach to the formalization and decomposition of the CPS objective, together with a mathematical model and a method for determining (recalculating) the probability density distributions of characteristics and parameters at any point in time from given initial a priori data. When recalculating parameters at a given stage, not only time can be taken into account, but also the functions of dynamic errors and inconsistencies in changes in the parameters and characteristics of technological processes.
Materials and methods
The developing class of systems called CPS is characterized by deep penetration and close (including feedback) coupling between the methods and means of cybernetics and the physical environment [6,9]. The physical environment is the subject of research and monitoring for the subsequent control of its facilities. Emissions from energy and industrial units can be fully represented in the physical environment of the CPS.
Moreover, the purpose of the CPS, as a necessary attribute of any control system, can be formulated with sufficient certainty, including on the basis of objective conservation laws [10]. At the same time, the implementation of the life cycle of the CPS in relation to the above objects involves uncertainties and random events. The implementation of the development, design and technological preparation stages of production should take random factors into account to ensure the quality of the CPS. The article consistently considers an approach to the formalization of the target based on models of physical processes and their interpretation in terms of control, with a formal representation of the target in the form of a probability functional of the requirements of the specification (quality indicators). Further decomposition of the target to the level of the parameters that form these quality indicators allows moving to probabilistic quality estimates based on given initial a priori data, i.e., probability densities.
Formalization of the CPS target
The target of the CPS, in the form of reducing emissions in the energy sector and industry through technology control, can be formulated on the basis of physical laws that summarize many experiments and describe the evolution of the sought quantities both in space and in time. Most problems in physics lead to the need to solve differential equations [11]. It was shown in [4] that the equation of turbulent diffusion and convection corresponds to the character of the emission processes. This equation represents the local form of the law of conservation of mass and, after transformations, is reduced to the "input-output" form necessary for the implementation of the control functions for emission reduction technologies. The "output" is the emissions that are mitigated by the technology control system. The system, in turn, is subject to requirements whose fulfillment is represented by the generalized functional of the "maximum probability of quality assurance" [12], including the probability P of meeting the requirements of the technical task (TT), where V0 and W0 are the "initial" characteristics and criteria. At subsequent stages, they are coordinated up to the n-th stage of production preparation.
Decomposition of the target into macro and micro levels
The target of the CPS is decomposed into macroscopic characteristics that serve to assess quality. Behind these basic characteristics of the CPS there are many different parameters, which are the responsibility of the numerous designers and manufacturers of individual subsystems, units and parts of the CPS. The quality of the CPS is determined by compliance with the requirements. The main characteristics, called here "macroscopic", should not go beyond the limits established by the TT. At the stages of the life cycle, nonconformities are assessed in the form of deviations and errors. Macroscopic assessments at the stages of the life cycle, including the processes of design and production of the CPS, are carried out on measured coordinates that depend not only on the parameters of the object itself, but also on the parameters of the design and production process. This is especially true for the stages of technological preparation of production and the testing and selection of technological and testing equipment. At these stages, the "developer enterprise" receives from the "manufacturer" notifications about changes in design documentation. These notifications are often spontaneous and random.
Assessment of characteristics
Deviations from the TT and errors have the character of random functions whose ordinates go beyond the established norms and tolerances. In the theory of random processes, this reduces to the so-called "outlier" (excursion) problem, whose solution does not lend itself to correlation theory.
The proposed approach is reduced to solving the following problems.
Determination, in the design process, of the probability that the ordinates of the measured variables go beyond the limit values of the TT, and of the duration of exceeding these values. In these cases, correlation analysis is inapplicable, and it is necessary to obtain the probability density distributions in time, and the time characteristics themselves. This is due to the fact that these probabilistic characteristics at time t_{k+s} are not related to the characteristics at time t_k. Assessments obtained in the early stages of design should provide an estimate of possible product defects in production.
Assessment of the state of technological equipment in preparation for production and in the production process.
Monitoring the state of the "project" during its movement from the technical assignment to the installation according to the probabilistic characteristics of the output variables.
Reduction of labor costs for testing by correcting design documentation and improving the technological preparation of production.
When solving these problems, it is proposed to proceed from the representation of the CPS as a dynamic system with components Y(t) = (y_1, …, y_n) and parameters Λ = (λ_1, …, λ_m). For further transformations, one needs to pass to the new variables and parameters P_Y and P_Λ. Consider first a particular case in which the λ are deterministic and the initial conditions Y_0 are random values, with the initial probability density P_{Y,0}(y_{10}, …, y_{n0}, t_0).
Assessment of quality at the life cycle stages
At the initial stages, it is advisable to develop an enlarged metamodel of the product, from which a differential equation is constructed for the new variables P_Y. Subsequently, this model is transformed into intermediate mathematical models [12]. The initial density P_{Y,0}(Y_0, t_0) is set according to a priori data obtained from previous studies, analogs and the results of a preliminary analysis of design problems accumulated in databases [13-15]. In particular, if statistical characteristics can be obtained at the stages of testing elements, or from the data of the manufacturer of component parts, then the "recalculation" of the probability densities of these data into P_Λ allows setting the initial conditions. Equation (2) can be written in symmetric form, and probability estimates then follow as in formula (4). The average duration of an excursion beyond the limits is estimated by the well-known formulas of the theory of random functions. Estimates using formulas similar to formula (4) are carried out throughout the selected life cycle and represent the probability of achieving the goal formulated as functional (1). The recalculation of the parameters obtained at these stages of the life cycle into the estimated characteristics is carried out on the basis of the relationship, known from probability theory, $P_z(z) = P_x(x) \left| \frac{dx}{dz} \right|$, where $P_z$ is the probability density of the parameter $z$, $P_x$ is the known probability density of the parameter $x$, and $z = \varphi(x)$ is the given design characteristic.
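To illustrate the two calculations mentioned above, the excursion beyond a tolerance limit and the density recalculation, here is a sketch (ours) that uses Rice's formula for the mean upcrossing rate of a stationary Gaussian process together with the change-of-variables relation; the numbers are arbitrary:

```python
import math

def excursion_stats(a, sigma_x, sigma_xdot):
    """Stationary Gaussian process x(t) and tolerance limit a:
    exceedance probability, Rice upcrossing rate, mean excursion duration."""
    p_exceed = 0.5 * math.erfc(a / (sigma_x * math.sqrt(2)))
    nu = (sigma_xdot / (2 * math.pi * sigma_x)) * math.exp(-a**2 / (2 * sigma_x**2))
    return p_exceed, nu, p_exceed / nu    # mean duration = P(x > a) / nu

print(excursion_stats(a=3.0, sigma_x=1.0, sigma_xdot=5.0))

def recalc_density(p_x, phi_inv, dphi_inv, z):
    """Change of variables z = phi(x), monotone: P_z(z) = P_x(x(z)) |dx/dz|."""
    return p_x(phi_inv(z)) * abs(dphi_inv(z))

# Example: x ~ N(0,1), design characteristic z = exp(x), so x = ln z
p_x = lambda x: math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
print(recalc_density(p_x, math.log, lambda z: 1.0 / z, z=1.5))
```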
Results
The results obtained make it possible to produce probabilistic assessments of the quality of systems according to the main characteristics of the technical specifications at the stages of the life cycle. In this case, various kinds of deviations beyond the boundaries of the established norms and tolerances, which are random in nature, can be taken into account when performing the stages of design, construction and technological preparation of production. The proposed mathematical model makes it possible to determine the probability density distribution of characteristics and parameters at an arbitrary moment in time for a given initial a priori characteristic. When recalculating the parameters at the stage under consideration, not only time can be taken into account, but also the functions of dynamic errors, inconsistencies in parameter changes, and the characteristics of technological processes.
Discussion
The tasks of obtaining a priori information remain relevant. Their solution may be associated with the creation and accumulation of the content of knowledge bases in the relevant problem areas.
Conclusion
The given approach, mathematical models and methodology make it possible to ensure the quality control of CPS, whose characteristic feature is uncertainty of behavior and the random nature of the processes. The materials in this article are not intended to be complete and will be further developed with regard to obtaining a priori data, a detailed description of the life cycle, and practical applications.
Differentially Deep Subspace Representation for Unsupervised Change Detection of SAR Images
Temporal analysis of synthetic aperture radar (SAR) time series is a basic and significant issue in the remote sensing field. Change detection as well as other interpretation tasks of SAR images always involves non-linear/non-convex problems. Complex (non-linear) change criteria or models have thus been proposed for SAR images, instead of direct difference (e.g., change vector analysis) with/without linear transform (e.g., Principal Component Analysis, Slow Feature Analysis) used in optical image change detection. In this paper, inspired by the powerful deep learning techniques, we present a deep autoencoder (AE) based non-linear subspace representation for unsupervised change detection with multi-temporal SAR images. The proposed architecture is built upon an autoencoder-like (AE-like) network, which non-linearly maps the input SAR data into a latent space. Unlike normal AE networks, a self-expressive layer performing like principal component analysis (PCA) is added between the encoder and the decoder, which further transforms the mapped SAR data to mutually orthogonal subspaces. To make the proposed architecture more efficient at change detection tasks, the parameters are trained to minimize the representation difference of unchanged pixels in the deep subspace. Thus, the proposed architecture is namely the Differentially Deep Subspace Representation (DDSR) network for multi-temporal SAR images change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed architecture.
Introduction
Change detection with remote sensing images is the process of identifying and locating differences in regions of interest by observing them at different dates [1]. It is of great significance for many applications of remote sensing images, such as rapid mapping of disasters, land-use and land-cover monitoring and so on. Wessels et al. [2] use optical images with the reweighted multivariate alteration detection method to identify change areas, and then update the land-cover mapping. A multi-sensor change detection method between optical and synthetic aperture radar (SAR) imagery is proposed in [3] for earthquake damage assessment of buildings. Taubenbock et al. [4] propose a post-classification based change detection using optical and SAR data for urbanization monitoring. Multi-temporal airborne laser data is used to monitor forest change in [5]. In this paper, we tackle the issue of change detection using SAR images. Unlike optical remote sensing images, SAR images can be acquired under any weather condition at day or night; however, there usually are more challenges (i.e., non-linear/non-convex problems) for SAR image visual and machine interpretation due to the coherent imaging mechanism (speckle).
For change detection using remotely sensed optical images, the most widely used criterion is the difference operator [1] (for single channel images) or change vector analysis [6][7][8] (for multi-band/spectral images). Due to the temporal spectral variance caused by different atmospheric conditions, illumination and sensor calibration, image transformation has been widely used to yield robust change detection criteria. The core idea of the image transformation is to transform the multi-band/spectral image into a specific feature space, in which the unchanged temporal pixel pairs have similar representations while the changed ones differ from each other. Principal component analysis (PCA) [9][10][11] is one of the state-of-the-art operators for modeling the temporal spectral difference of unchanged pixels. Beyond PCA, the Kauth-Thomas transformation [12], the Gram-Schmidt orthonormalization process [13,14], multivariate alteration detection [15,16] and slow feature analysis [17,18] theories have been used for optical image change detection. However, these algorithms are mainly designed for optical images and usually fail to deal with SAR images with speckle.
Given SAR images, we may face a more complex situation in which the multi-temporal images lie in different feature spaces and changed/unchanged pixels are not linearly separable, due to the coherent imaging mechanism. Two main approaches have been developed in the literature: coherence change detection and incoherent change detection. The former uses the phase information of SAR time series to study the coherence map, which places strict requirements on the input multi-temporal SAR images [19]. Incoherent change detection relies more on the amplitude or intensity values of SAR data, for instance the amplitude ratio or log-ratio [20]. Improvements have been proposed thanks to automatic thresholding methods [21] and multi-scale analysis to preserve details [22]. Lombardo and Oliver [23] propose a generalized likelihood ratio test given by the ratio between geometric and arithmetic means for SAR images. Quin et al. [24] extend the SAR ratio to more general cases with an adaptive and non-linear threshold, which can be applied not only to SAR image pairs but also to long-term SAR time series. Beyond change detection, Su et al. [25] propose a generalized likelihood ratio test based spectral clustering for temporal-behaviour analysis of long-term SAR time series. Clearly, non-linear change criteria have been widely used for SAR images in the literature. However, these change criteria usually yield noisy results due to SAR speckle, or face a trade-off between the spatial resolution and the smoothness of the detection results.
Recently, deep learning techniques have experienced rapid growth and achieved remarkable success in various fields. For the change detection problem with remotely sensed data, a large number of deep network architectures have been proposed. An improved UNet++ [26] is proposed to solve the error accumulation problems in deep-feature based change detection. Ji et al. [27] apply a Mask R-CNN based building change detection network with self-training ability, which does not need high-quality training samples. The dual learning-based Siamese framework in [28] can reduce the domain differences of bi-temporal images by retaining the intrinsic information and translating the images into each other's domain. A set of convolutional neural network features [29] has been used to compute difference indices. Similarly, a sparse autoencoder is applied in [30] to extract robust SAR features for change detection.
In this paper, we propose a differentially deep subspace representation (DDSR) for multi-temporal SAR images. The proposed network consists of a non-linear mapping network followed by a linear transform layer to deal with the complex patterns of changed and unchanged pixels in SAR images. The non-linear mapping network is built upon an autoencoder-like (AE-like) deep neural network, which non-linearly maps the noisy SAR data to a low-dimensional latent space. Contrary to a normal autoencoder (AE) network, the proposed architecture is trained to minimize the representation difference of unchanged pixel pairs, instead of the reconstruction error of the decoder. To better separate the unchanged and changed pixels in the latent space, a single-layer self-expressive network linearly transforms the mapped SAR data into mutually orthogonal subspaces. In the transformed subspace, unchanged pixel pairs have similar representations, while temporally changed ones differ considerably from each other. Changed pixels are finally identified by an unsupervised K-Means clustering method [31]. Note that a similar idea has been proposed in [32], in which slow feature analysis [18] is applied to perform the linear transform, instead of our self-expressive network trained with the backpropagation algorithm.
This paper is organized as follows. Section 2 briefly recalls the non-linear/linear subspace approaches. The proposed network is presented in Section 3, which is followed by the evaluation (Section 4) and the conclusion (Section 5).
Related Work
To deal with the nonlinearities in the SAR change detection task, the proposed DDSR maps the bi-temporal SAR data into a subspace using a non-linear AE-like network followed by a linear self-expressive layer. The change criterion is computed from the DDSR difference of the input bi-temporal SAR images. Similar ideas have been proposed in the literature.
Deep Subspace Clustering
Ji et al. [33] propose a deep autoencoder framework for subspace clustering, in which a self-expressive layer is introduced between the encoder and the decoder to learn the pairwise affinities of the input data through a standard backpropagation procedure. Figure 1a gives a brief illustration of this deep subspace clustering network. It provides an explicit non-linear mapping for the complex input data that is well adapted to subspace clustering and yields significant improvement over state-of-the-art subspace clustering solutions. A structured autoencoder in [34] introduces a global structure prior into the non-linear mapping. These deep subspace approaches mainly focus on clustering or recognition problems, in which the network weights are trained to exploit the similarity information among the input data, rather than the differential information needed for the change detection task. Even though these approaches can be easily adapted to change detection, the performance might not be optimal. In this paper, the proposed architecture discards the decoder network and redesigns the network loss to adapt it to SAR image change detection.
Deep Slow Feature Analysis Network
In [32], Du et al. present a slow feature analysis (SFA) based deep neural network for optical remote sensing change detection. This network non-linearly maps the input bi-temporal data into a higher-dimensional space, as shown in Figure 1b. The classic SFA algorithm is then applied to suppress the unchanged components and highlight the changed components of the mapped data. In our work, the non-linear mapping is performed by the AE-like network of DDSR, which acts like a sparse autoencoder and compacts the input data into a lower-dimensional space. In addition, compared with the SFA-based linear transformation, the self-expressive layer can be trained via backpropagation to adapt to the given task and dataset.
Differentially Deep Subspace Representation (DDSR) for Change Detection
In our view, non-linear transformations for change detection generally outperform linear ones, since they can handle the complex patterns of the input data. Non-linear kernel based methods have also been proposed [35][36][37]; however, it is not clear whether the pre-defined kernels are suitable for SAR image change detection tasks. In this work, our goal is to learn an explicit mapping that makes changed and unchanged pixel pairs more separable in the transformed subspaces. This section builds our architecture, the differentially deep subspace representation (DDSR), upon the classic autoencoder network. As shown in Figure 2, the non-linear part (the AE-like network) first maps the input bi-temporal SAR data into a low-dimensional latent space. The linear part (the self-expressive layer) further transforms the mapped SAR data into a subspace. Contrary to minimizing the reconstruction error, the proposed architecture is trained to compact unchanged pixel pairs and spread apart the changed ones in the subspace.
AE-Like Network Based Non-Linear Mapping
Basically, the encoder of the AE network is a classical multi-layer deep neural network, in which each stage, consisting of an input layer, a hidden layer and an output layer, non-linearly transforms the input data into latent features. Given a pair of pixels $\{x, y\}$, let $\{X, Y\}$ be the corresponding patches centered at $x$ and $y$, respectively. At the first stage, the patch pair $\{X, Y\}$ is reshaped to form the input vectors $I$ (i.e., $I_X$ and $I_Y$). The hidden layer is computed by

$H = f(W_H I + B_H)$,

where $W_H \in \mathbb{R}^{M \times N}$ denotes the weight matrix of the hidden layer, $B_H \in \mathbb{R}^{M}$ denotes the bias and $f$ denotes the activation function performing the non-linear mapping. At the second stage, the latent feature $H$ is mapped to the output by

$O = f(W_O H + B_O)$,

where $W_O$ and $B_O$ are the weight matrix and bias of the output layer.
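As a concrete illustration, the following is a minimal Python/NumPy sketch of such an AE-like encoder. The layer sizes, the tanh activation and the function names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def init_encoder(layer_sizes, seed=0):
    """Randomly initialize encoder weights and biases (the paper likewise
    starts from random weights rather than a pre-trained AE)."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, 0.1, size=(n_out, n_in))  # W_H in R^{M x N}
        b = np.zeros(n_out)                           # B_H in R^M
        params.append((W, b))
    return params

def encode(params, I):
    """Non-linear mapping H = f(W_H I + B_H), applied layer by layer.
    I is a reshaped patch vector (e.g., a 5 x 5 patch -> 25-vector)."""
    h = np.asarray(I, dtype=float)
    for W, b in params:
        h = np.tanh(W @ h + b)  # f: tanh assumed for illustration
    return h
```

For instance, `init_encoder([25, 25, 25, 25])` would map a 25-dimensional patch vector through three hidden layers of 25 neurons each, matching the configuration chosen in the experiments below.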
Self-Expressive Layer Based Linear Transformation
As shown in Figure 2, the main motivation of the self-expressive layer comes from the PCA and SFA theories. However, unlike PCA or SFA, the linear transformation of our DDSR is learned by the backpropagation algorithm, instead of the classic or generalized eigenvalue decomposition. This data-driven strategy makes the self-expressive layer more adaptive to the given datasets than PCA and SFA. Let $Z \in \mathbb{R}^{M \times 1}$ and $\tilde{Z} \in \mathbb{R}^{M \times 1}$ denote the input (i.e., the output of the AE-like network) and the output of the self-expressive layer, respectively:

$\tilde{Z} = W_{SE} Z$,

where $W_{SE} \in \mathbb{R}^{M \times M}$ denotes the weights of the self-expressive layer. To form a mutually orthogonal subspace, each row vector of $W_{SE}$ has to be orthogonal to every other row vector of $W_{SE}$.
Network Architecture of DDSR
Since pixel-wise change detection is strongly affected by speckle, a patch-wise strategy is applied in this paper, i.e., a square image patch formed by a pixel and its surrounding pixels. Each patch pair $\{I_X, I_Y\}$ with center pixels $\{x, y\}$ is reshaped to vectors $X \in \mathbb{R}^{N \times 1}$ and $Y \in \mathbb{R}^{N \times 1}$ ($N = 5 \times 5$ in this paper), as shown in Figure 2. Through the AE-like network (Section 3.1), the input bi-temporal SAR patches $X$ and $Y$ are non-linearly mapped to a lower-dimensional latent space, denoted by $Z_X \in \mathbb{R}^{M \times 1}$ and $Z_Y \in \mathbb{R}^{M \times 1}$ (where $N > M$). $Z_X$ and $Z_Y$ are then linearly transformed to $\tilde{Z}_X$ and $\tilde{Z}_Y$ by the self-expressive layer. The change criterion $r$ between pixels $x$ and $y$ is calculated by

$r = \| \tilde{Z}_X - \tilde{Z}_Y \|_2$. (4)

To identify the changed pixels, unsupervised K-Means clustering is applied to classify $\{r\}$ into the changed and unchanged groups.
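A possible end-to-end inference sketch is given below, reusing the `encode` helper from the sketch above; the Euclidean norm for the criterion and the use of scikit-learn's K-Means are assumptions consistent with, but not guaranteed to match, the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def change_map(params, W_se, patches_x, patches_y):
    """Compute r = ||Z~_X - Z~_Y||_2 per pixel, then split {r} into two
    groups; the group with the lower |cluster center| is 'unchanged'.
    patches_x, patches_y: (num_pixels, N) arrays of reshaped patch pairs."""
    r = np.empty(len(patches_x))
    for i, (x, y) in enumerate(zip(patches_x, patches_y)):
        zx, zy = encode(params, x), encode(params, y)   # AE-like mapping
        r[i] = np.linalg.norm(W_se @ zx - W_se @ zy)    # self-expressive layer
    km = KMeans(n_clusters=2, n_init=10).fit(r.reshape(-1, 1))
    unchanged = int(np.argmin(np.abs(km.cluster_centers_.ravel())))
    return (km.labels_ != unchanged).astype(np.uint8), r
```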
Training Strategy
As shown in Figure 2, the classic AE network is adapted to handle the change detection task. The whole network is trained by minimizing a loss computed from the differential representation of the bi-temporal SAR patches,

$L = \mathrm{Diff}(\tilde{Z}_X, \tilde{Z}_Y) + \lambda_1 \mathrm{Norm}(\tilde{Z}_X, \tilde{Z}_Y) + \lambda_2 \mathrm{Regl}(W_{AE}, W_{SE})$, (6)

where $\mathrm{Diff}(\tilde{Z}_X, \tilde{Z}_Y) = \|\tilde{Z}_X - \tilde{Z}_Y\|_2^2$ denotes the representation difference, $\mathrm{Norm}(\tilde{Z}_X, \tilde{Z}_Y)$ is the data constraint term and $\mathrm{Regl}(W_{AE}, W_{SE})$ is the weight regularization term. The weights $\Lambda = \{\lambda_1, \lambda_2\}$ balance the terms in the loss function. The data constraint term ensures that the output of DDSR carries significant information (avoiding the meaningless solution $W_{AE} = 0$) by constraining the variance of each output channel to one, i.e., $\mathrm{Var}(\tilde{Z}) = E$, where $E \in \mathbb{R}^{M \times 1}$ is a column vector whose elements are all 1. Note that theoretically a non-zero variance constraint is enough; for the sake of simplicity, the unit-variance constraint is used in this paper. The weight regularization term is calculated by

$\mathrm{Regl}(W_{AE}, W_{SE}) = \|W_{AE}\|_2^2 + \|W_{SE}\|_2^2 + \sum_{i \neq j} \mathrm{Cov}(w^i_{SE}, w^j_{SE})^2$,

where $\|W_{AE}\|_2^2$ and $\|W_{SE}\|_2^2$ are classic regularization terms. The third term controls the orthogonality of $W_{SE}$, in which $\mathrm{Cov}(w^i_{SE}, w^j_{SE})$ is the correlation coefficient between the $i$-th row vector and the $j$-th row vector of $W_{SE}$. Theoretically, the self-expressive layer performs like a PCA or SFA approach, for which orthogonality is needed to obtain a complete and non-redundant representation. Without this orthogonality term, the output of DDSR would collapse to a constant vector.
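The loss can be sketched as below over a batch of pixel pairs; the exact variance penalty and the squared-correlation orthogonality term are our reading of the description above and may differ in detail from the authors' code.

```python
import numpy as np

def ddsr_loss(Zx, Zy, W_ae_list, W_se, lam1=1.0, lam2=1.0):
    """Zx, Zy: (batch, M) subspace representations of the two dates.
    Diff: representation difference of (mostly unchanged) pixel pairs.
    Norm: unit-variance constraint avoiding the trivial zero solution.
    Regl: weight decay plus an orthogonality penalty on rows of W_SE."""
    diff = np.mean(np.sum((Zx - Zy) ** 2, axis=1))
    var = np.concatenate([Zx.var(axis=0), Zy.var(axis=0)])
    norm = np.sum((var - 1.0) ** 2)                 # unit variance per channel
    regl = sum(np.sum(W ** 2) for W in W_ae_list) + np.sum(W_se ** 2)
    C = np.corrcoef(W_se)                           # row-to-row correlations
    regl += np.sum((C - np.eye(W_se.shape[0])) ** 2)  # push rows orthogonal
    return diff + lam1 * norm + lam2 * regl
```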
Implementation Details
Since no labeled data are needed in the training stage, DDSR is unsupervised. However, DDSR assumes that unchanged pixel pairs far outnumber changed ones, since theoretically only unchanged pixel pairs satisfy the minimization of the proposed loss (Equation (6)). A similar assumption is also used in the slow feature analysis (SFA) based unsupervised change detection approach of [18]. This assumption might not hold when the given bi-temporal SAR images span a very long time interval (changed pixels/regions outnumber unchanged ones). However, one can easily discard the assumption by introducing a pre-detection strategy (e.g., the classic log-ratio change detection approach) that provides unchanged pixel pairs as training samples. A similar strategy is used in [30].
Since the proposed network focuses on the change detection task rather than the representation ability of the classic AE network, the network parameters are initialized randomly, not by a pre-trained AE network. In the training stage, all the patch pairs are fed into the DDSR network. The Adam optimization algorithm is applied to minimize the loss (Equation (6)) and obtain the optimal parameters $W_{AE}$ and $W_{SE}$, with a learning rate of 0.1 and 1500 iterations. In the testing stage, the change criterion $r$ (Equation (4)) is computed pixel by pixel. The classic K-Means clustering method is then performed on $\{r\}$ to group the pixel pairs into two groups, in which the group with the lower magnitude of the cluster center $|r|$ is the unchanged group.
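A minimal training loop consistent with this description could look as follows in PyTorch; `model` is any module mapping (batch, N) patch vectors to (batch, M) subspace codes, and for brevity the sketch regularizes all parameters uniformly instead of singling out the orthogonality term on $W_{SE}$ shown in the previous sketch.

```python
import torch

def train_ddsr(model, X, Y, iters=1500, lr=0.1, lam1=1.0, lam2=1.0):
    """Adam, 1500 iterations, learning rate 0.1, random initialization,
    as described in the text. X, Y: (num_pixels, N) float tensors."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(iters):
        zx, zy = model(X), model(Y)
        diff = ((zx - zy) ** 2).sum(dim=1).mean()
        var = torch.cat([zx.var(dim=0), zy.var(dim=0)])
        norm = ((var - 1.0) ** 2).sum()              # unit-variance constraint
        regl = sum((p ** 2).sum() for p in model.parameters())
        loss = diff + lam1 * norm + lam2 * regl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```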
Experiment
In this section, we investigate the effectiveness of the non-linear part (i.e., the AE-like network) and test our DDSR network with different parameters, e.g., the number of hidden neurons and the weights in the loss. Four real SAR datasets are used to evaluate the effectiveness and advantages of the proposed method.
Datasets and Evaluation Metrics
Four SAR datasets are used in this experiment; they are described in detail in the caption of Figure 4 below. In order to verify the validity of the proposed method, five metrics are computed to quantitatively assess the detection results: Precision (P), Recall (R), Overall Accuracy (OA), the Kappa coefficient and F1.
$P = \frac{TP}{TP + FP}$, $R = \frac{TP}{TP + FN}$, $OA = \frac{TP + TN}{TP + FP + TN + FN}$, $F_1 = \frac{2PR}{P + R}$, (10)

where TP, FP, TN and FN denote the numbers of true positives, false positives, true negatives and false negatives, respectively, as defined in Table 1.
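These metrics follow directly from the confusion matrix; the sketch below is our helper, including the standard chance-agreement form of the Kappa coefficient, which the text names but does not spell out.

```python
import numpy as np

def change_metrics(pred, truth):
    """P, R, OA, Kappa and F1 from binary change maps (1 = changed)."""
    pred, truth = pred.ravel().astype(bool), truth.ravel().astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    n = tp + fp + fn + tn
    p, r = tp / (tp + fp), tp / (tp + fn)
    oa = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (oa - pe) / (1 - pe)   # agreement corrected for chance
    f1 = 2 * p * r / (p + r)
    return dict(P=p, R=r, OA=oa, Kappa=kappa, F1=f1)
```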
Analysis of Parameter Setting
As described in Section 3, the hyperparameters are selected before running the proposed network, i.e., the number of hidden neurons in the AE-like network, the number of layers of the AE-like network and the weights in the loss. The efficiency of feature learning may be affected by the number of hidden neurons and the number of hidden layers, while the weights in the loss function reflect the influence of the balance between the different constraints and the objective on the detection results. Comparison experiments are thus conducted here to investigate a proper hyperparameter setting. Besides, there is a strong link between the patch size and the image resolution or the size of the changed regions. Considering the SAR datasets tested in our experiments, we choose a patch size of 5 × 5 through some comparative experiments and keep this patch size in the following experiments.
Number of Hidden Layers and Hidden Neurons
We argue that the number of hidden layers and the number of hidden neurons interact with each other. To choose these parameters, we adopt a grid search to avoid an arbitrary choice, i.e., the number of hidden layers in {0, 1, 2, 3} and the number of hidden neurons in {10, 25, 50, 100}. The weights in the loss function are λ1 = 1.0 and λ2 = 1.0. The results are evaluated by Kappa and F1.
The change detection performance against the number of non-linear hidden layers and hidden neurons in the AE-like network is shown in Figure 5. The detection accuracy increases significantly with the introduction of the non-linear mapping by the AE-like network, and it gradually improves as layers are added, while the number of neurons in the hidden layers has only a slight effect on the results. Consequently, in the following experiments, we run the AE-like network with 3 hidden layers and 25 hidden neurons, balancing the detection accuracy against the computational complexity.
Tables 2-5 list the Kappa and F1 of the change detection results on the Huangshi, Daye, San Francisco and Guangdong datasets. It can be seen that λ1 and λ2 have a great influence on the change detection results. An extremely small λ1 tends to neglect the variance constraint, which leads to failure of the network training (the weights collapse to W_AE = 0, W_SE = 0). An extremely small λ2 amounts to neglecting the covariance constraint, leaving much redundant information among the channels of the output. The change detection results may drop by up to 5% in terms of Kappa and F1 given unbalanced settings of λ1 and λ2. However, this drop only occurs in extreme cases, e.g., {λ1 = 10, λ2 = 0.01} and {λ1 = 0.1, λ2 = 10}. λ1 = 1.0 and λ2 = 1.0 are thus chosen in the following experiments.
Parameter Setting and Comparison Methods
In order to verify the superiority and efficiency of our proposed method, different change detection approaches are tested as reference methods in this experiment: (1) the classic mean ratio operator (MR) [1]; (2) NORCAMA [25], a generalized likelihood ratio test based change criterion; (3) SAE + FCM + CNN [30], a deep-feature based change detection method. In our proposed approach, each 5 × 5 patch is converted into a vector as the input of the network. The number of neurons in each of the 3 hidden layers of the AE-like network is 25; consequently, the number of neurons in the self-expressive layer is 25 as well. The weights in the loss are λ1 = 1.0 and λ2 = 1.0.
Experimental Results
The change detection maps are shown in Figures 6-9 and the quantitative metrics are presented in Tables 6-9. From the results, we find that the classic MR gives noisy detection results, and its detection accuracy is lower than that of the other approaches. NORCAMA, with the help of a pre-denoising operation, yields less noisy detection results; however, its detection accuracy depends strongly on the pre-denoising performance. SAE + FCM + CNN achieves a balance between precision and recall and produces less noise than the classic MR. However, it relies heavily on pseudo labels, which can make the final detection accuracy very low when the pre-detection/classification results are poor, and the edges of its detection results are indistinct. Overall, our DDSR network outperforms the reference methods with higher detection accuracy, smoother detection results and clearer edges.
Figure 2 .
Figure 2. The differentially deep subspace representation for synthetic aperture radar (SAR) image change detection. The network consists of an encoder with 3 layers (non-linear mapping), a self-expressive layer (linear transform) and a classic K-Means clustering step.
Figure 3 .
Figure 3. The encoder network diagram. (a) A simple encoder network consisting of only an input layer, a hidden layer and an output layer. (b) A multi-layer encoder network including an input layer, two hidden layers and an output layer.
(1) Huangshi dataset, shown in Figure 4a: Sentinel-1 SAR images of Huangshi, Hubei, China, acquired on 8 October 2014 and 19 December 2014; the spatial resolution is 5 m and the image size is 1024 × 1024. (2) Daye dataset, Figure 4b: Sentinel-1 SAR images of Daye, Hubei, China, acquired on 8 October 2014 and 19 December 2014, with an image size of 1024 × 1024. (3) San Francisco dataset, Figure 4c: TerraSAR images of San Francisco, USA, acquired on 5 December 2007 and 16 December 2007; the spatial resolution is 1 m and the image size is 1024 × 1024. (4) Guangdong dataset, Figure 4d: TerraSAR images of Guangdong, China, acquired on 24 May 2008 and 19 December 2008, with an image size of 1024 × 1024. The corresponding ground truth maps are labeled manually, as shown on the right of Figure 4.
Figure 4 .
Figure 4. Datasets tested in the experiments. (a) Huangshi dataset. (b) Daye dataset. (c) San Francisco dataset. (d) Guangdong dataset. From left to right: the bi-temporal SAR images and the corresponding reference change maps. In the reference change maps, the unchanged and changed pixels are gray and white, respectively (black is not defined).
Figure 5 .
Figure 5. The influence of the number of non-linear layers and hidden neurons in the AE-like network on the change detection results. The vertical axis represents the Kappa and F1 metrics of the detection results. One horizontal axis denotes the proposed network without the autoencoder (AE)-like network and with the AE-like network containing 1, 2 or 3 hidden layers; the other horizontal axis denotes the number of hidden neurons. Different colors denote different numbers of hidden layers. (a) Change detection results on the Huangshi dataset. (b) Change detection results on the Daye dataset. (c) Change detection results on the San Francisco dataset. (d) Change detection results on the Guangdong dataset. The left panels show the Kappa metric and the right panels the F1 metric.
Figure 6 .
Figure 6. Change detection results of the Huangshi dataset by (a) mean ratio (MR), (b) NORCAMA, (c) SAE + FCM + CNN, (d) our proposed approach. The left shows the detection result with the ground truth mask; the right shows the detection result without the ground truth mask.
Figure 7 .
Figure 7. Change detection results of the Daye dataset by (a) MR, (b) NORCAMA, (c) SAE + FCM + CNN, (d) our proposed approach. The left shows the detection result with the ground truth mask; the right shows the detection result without the ground truth mask.
Figure 8 .
Figure 8. Change detection results of the San Francisco dataset by (a) MR, (b) NORCAMA, (c) SAE + FCM + CNN, (d) our proposed approach. The left shows the detection result with the ground truth mask; the right shows the detection result without the ground truth mask.
Figure 9 .
Figure 9. Change detection results of the Guangdong dataset by (a) MR, (b) NORCAMA, (c) SAE + FCM + CNN, (d) our proposed approach. The left shows the detection result with the ground truth mask; the right shows the detection result without the ground truth mask.
Table 1 .
Confusion matrix of change detection results.
Table 2 .
Change detection results of Huangshi dataset with different weights in the loss function.
Table 3 .
Change detection results of Daye dataset with different weights in the loss function.
"-" denotes failure of the network training.
Table 4 .
Change detection results of San Francisco dataset with different weights in the loss function.
"-" denotes failure of the network training.
Table 5 .
Change detection results of Guangdong dataset with different weights in the loss function.
"-" denotes failure of the network training. | 5,703.6 | 2019-11-21T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
AN ANALYTICAL STUDY ON THE EXPORT PERFORMANCE OF DAIRY INDUSTRY IN INDIA
With the rise of factory farming, milk production is now an almost unnatural operation. The modern dairy farm can have hundreds, even thousands of cows, and today's average dairy cow produces six to seven times as much milk as she did a century ago. Currently, the United States is the largest producer of milk in the world, followed by India and China. India, despite being one of the largest milk producers in the world, has to import a portion of its milk products, and its exports are negligible in the world export share. This paper examines the export performance of the dairy industry of India: trends, challenges and suggestions for improving the trade situation. The existing literature has been reviewed accordingly, covering trade exports, imports and the factors affecting milk production in the country. The objectives of the study are to find out the reasons for the low per-unit production, the imports and the negligible exports. In a nutshell, it can be said that there are many unexplored areas in which researchers can seek findings that can be helpful in achieving a trade balance for the Indian economy.
INTRODUCTION
The Indian dairy industry is one of the largest and fastest growing industries in the country, providing ample job opportunities and contributing significantly to the economy. India now indisputably has the world's biggest dairy industry, at least in terms of milk production: last year (2014-15) India produced close to 100 million, 15% more than the US and three times as much as China. India also produces the biggest directory or encyclopedia of any dairy industry in the world. The dairy sector in India has shown remarkable development in the past decade, and India has now become one of the largest producers of milk and value-added milk products in the world. Projected data suggest that India would even overtake the European Union by 2020. In India, Uttar
OBJECTIVES
To study the behaviour of major importers. To analyse the export performance of Indian Dairy Industry. To study the competitive advantage of the industry if any. To study Global Demand and export growth opportunities. To assess the implications of research in further economic growth.
LITERATURE REVIEW
The investigator has surveyed the relevant literature on the subject. The literature review follows the theoretical introduction and is divided into two broad sections: the first deals with issues and concerns in the export performance of the Indian dairy industry, and the second reviews the relevant literature. The review ends with the research gap that exists in the earlier research work.
The literature review surveyed the literature on the subject critically, ending with the identification of the research gap in earlier work, from which the research question was framed. The survey is based on research papers, theses and journals, and the section concludes with a summary of the entire review. The topic of the research is analytical and comparative, and not much research has been done on it. India's dairy sector could benefit if dairy distortions in developed countries such as Canada, the EU countries and the USA were removed. But India also needs reforms to increase productivity and efficiency in milk production and processing. Small-scale farmers also need to be linked to the market through cooperatives and contract farming to benefit from the increasing domestic and foreign demand. Animal health and hygiene are also issues in this regard, and infrastructure facilities need improvement. This suggests that the export performance of dairy products should be analysed in order to assess the reasons for, and the means of increasing, India's export share in this area.
Brijesh Jha, in his thesis entitled 'India's Dairy Sector in Emerging Trade Order', has analysed the growth of the dairy industry in India. Exports are dominated by the US, New Zealand, the EU and Australia. India also exports a good amount of milk products, though it is often constrained by the arbitrary quality standards of some developed countries. In India, dairy trade gained importance after the trade liberalization of 1991. In the 1990s dairy trade fluctuated so much that it was not easy to discern any trend, the fluctuation being due primarily to world prices. In total, India was a net importer of dairy products in the 1990s but has emerged as a net exporter in recent years.
In an article published in Economic and Political Weekly entitled 'Multilateral Trade and Dairy Commodity Markets in LDCs - Recent Evidence from Delhi', C. S. Sundaresan says that the domestic market for dairy commodities and the industry in developing countries are exposed to international competition from developed countries, mainly because of the heavy subsidies prevailing in the farm sector for production and international trade in the developed countries. Therefore, developing countries in Asia are not in a position to achieve the desired competitive advantage in the international market, even while they fulfil the international norms on subsidies and other trade-distorting measures in their domestic sectors. The domestic market protection prevailing in India supports this rationale for an argument against international subsidies. Regional diplomatic organisations and governments have to come forward in order to safeguard the trade interests of member countries.
In an article entitled 'India's Dairy Exports: Opportunities, Challenges and Strategies' by Rakesh Mohan Joshi, India is the largest milk producing country and contributes about 15% of total world milk production. But due to the rapid increase in the domestic demand for milk, it became a net importer in 2010-12. India's total share in global milk exports is 0.68% and its share in imports is 0.04%. This trade pattern is directly related to milk production. There are some limitations to increasing milk exports from India: buffalo milk is much more available in India than cow milk, whereas people in developed countries prefer cow milk; large-scale production is not a feature of the Indian dairy industry, so economies of scale cannot be achieved; per-unit production is very low; and domestic demand is high, which leaves a low surplus for export.
GATT has failed to create a fair international agriculture market. Some developed countries protected their high-cost products by imposing quantitative restrictions and levying import duties, which boosted their domestic production; owing to the high cost, these products can be disposed of in the international market only with the help of export subsidies. These subsidised sales depress international market prices, which directly affects exporters. The same problem is found in developing countries, whose governments encourage production by subsidising it, causing over-production that is disposed of in the international market at low prices. This causes disruption in international markets. There are many emerging challenges for the Indian economy that can affect dairy exports: as import tariffs and quota restrictions decrease, quality barriers may arise in high-income countries to restrict exports from countries like India. Therefore, dairy exports should be reviewed in order to find out the causes of the decrease in India's export share.
'Opportunities and Challenges in Indian Dairy Industry Supply Chain: A Literature Review' by Rajeev Kumar and Dr. Raj Kiran Prabhakar suggests tremendous scope for growth in the Indian dairy industry. The Indian dairy industry reported a market size of USD 48.5 billion in FY2011; with a compound annual growth rate of 16%, it is anticipated to reach USD 118 billion in 2017. Currently, the Indian dairy market is growing at a rate of 7%. India is one of the largest milk producing countries, but its share in world trade is negligible. Despite the increase in production, there is a demand-supply gap due to changing consumption habits and rapid urbanisation. The ever-rising demand and the large demand-supply gap could make India a net importer of dairy products in the future.
Due to the implications in Policy Formulation for increasing growth in Dairy Industry, relevant research should be done.
CONCLUSION
In this paper, an attempt has been made to analyse the trends and challenges of dairy industry exports in India. The literature on exports has been analysed by segregating extant studies on different parameters, with a focus on dairy production techniques. To conclude, the area of dairy product exports holds promising prospects for future researchers, as it has many facets that are still unexplored. Given the large production, very low exports and increasing consumption of dairy products, it becomes essential to understand and analyse the most recent trends of exports and imports in the dairy sector and to find measures to increase the exports of dairy products.
"Agricultural And Food Sciences",
"Economics"
] |
Multibody System-Based Adaptive Formation Scheme for Multiple Under-Actuated AUVs.
Underwater vehicles' coordination and formation have attracted increasing attention, since they have great potential for real-world applications. However, such vehicles are usually under-actuated and have very limited communication capabilities. On the basis of the multibody system concept, a multiple autonomous underwater vehicle formation and communication link framework has been established with an adaptive radial basis function (RBF) strategy. For acoustic communication, a packet transmission scheme with topology and protocol has been investigated on the basis of an acoustic communication framework and transmission model. Moreover, the cooperative localization errors caused by packet loss are estimated through reinforcement learning radial basis function neural networks. Furthermore, in order to realize formation cruising, an adaptive RBF formation scheme with magnitude-reduced multi-layered potential energy functions has been designed on the basis of a time-delayed network framework. Finally, simulations and experiments have been extensively performed to validate the effectiveness of the proposed methods.
1. On the basis of the multibody system concept, the MAUVs' formation and communication link framework has been established; the connection between AUVs can be viewed as a spring-and-damper system. An adaptive control strategy has been set up for multiple under-actuated AUV formation with a desired formation region and a magnitude-reduced artificial potential function.
2. On the basis of the MAUVs' formation and communication link framework, the packet transmission scheme has been designed with a learning-based multi-layered network topology; the cooperative localization errors caused by packet loss are estimated and corrected through reinforcement learning RBF neural networks.
3. On the basis of the MAUVs' formation and communication link framework, an adaptive RBF formation scheme with magnitude-reduced multi-layered potential energy functions has been designed on the basis of the time-delayed network framework. Simulations and experiments have verified the performance of the proposed schemes.
Adaptive Communication Protocol
If we take the multiple AUVs' formation as a multibody system, the mobile AUV nodes should be connected and coordinated over network communication. However, the constantly varying node distances and transmission latency can cause difficulties in data transmission and relative distance observation. Moreover, the energy consumption is correlated with the data transmission.

Firstly, after channel contention and selection, the source AUV node and the objective AUV nodes reach time consensus through broadcasting and answering; the source AUV node will send information to objective AUV node 2 through objective AUV node 1.

Secondly, the source AUV node sends a request. The format of the data package is {RTS/overtime, node_pos, node_speed, destination_node}, which denotes the data send request, present position, speed and destined AUV node (in Figure 3, objective AUV node 2 is taken as the destined AUV node). When the "RTS" message is received by objective AUV node 1, it is forwarded to objective AUV node 2 immediately; at the same time, objective AUV node 1 waits for the "CTS" message from objective AUV node 2 or a timeout frame. When the "RTS" message is received by objective AUV node 2, it is informed about the forthcoming message, enters "response adjustment" status, and sends the "CTS" message to the source AUV node through objective AUV node 1. When the "CTS" message is received by objective AUV node 1, it is transmitted to the source AUV node with the data package format {CTS1/overtime, node1_pos, node1_speed, CTS2/overtime, node2_pos, node2_speed}, which denotes the speed and position of the objective nodes. If a timeout occurs, the source AUV node sends the request again or reselects another objective AUV node.

Thirdly, when the "CTS" message is received, "data" is sent from the source AUV node to objective AUV node 2 through objective AUV node 1. When objective AUV node 1 receives the "data", it enters "response adjustment" status and forwards the data package to objective AUV node 2. After the data has been received, objective AUV node 2 returns an "ACK" to the source node through objective AUV node 1, with the data package format {ACK1/overtime, node1_pos, node1_speed, ACK2/overtime, node2_pos, node2_speed}, which denotes the speed and position of the objective nodes. After the "ACK" has been received by the source node, the transmission process is terminated.
Protocol for One-Many Contending Topology
The protocol includes a four-way handshaking access method for "RTS", "CTS", "Data" and "Acknowledgment for receiving", as well as "Blocked to Send" packets for waiting control. The "Response adjustment" time includes the propagation and processing delay. Once a source decides to start transmission through one channel, the handshaking process starts and a "Blocked to Send" is transmitted to the other sources (other AUVs) at the same time (see Figure 4).
At the second stage, the source waits until receiving either "CTS" or a timeout frame. When a timeout occurs, the source is back to the channel contention and selection state. Obviously, the propagation delay between a frame and its "Response adjustment" is at least equal to the length of the frame to be transmitted/received in it so that the node response can be dealt with one after At the first stage, when the RTS frame is received, the destination is notified for the forthcoming transmission. The destination goes to the "Response adjustment" state to receive the packets from its neighbor through the selected channel. A block to send is transmitted to other neighbors so as to alert potential interferers that this channel will be busy for the whole carrying time before it can cause a collision.
At the second stage, the source waits until receiving either "CTS" or a timeout frame. When a timeout occurs, the source is back to the channel contention and selection state. Obviously, the propagation delay between a frame and its "Response adjustment" is at least equal to the length of the frame to be transmitted/received in it so that the node response can be dealt with one after another. Thus, the transmission of an "RTS" frame and reception of a "CTS" frame are two actions that have the same maximum single-trip propagation delay, Pmax. If we define the fixed length gap between a control frame and its consequent frame as "CML", thus, the gap at the source between RTS and CTS is called CMLRTS, and the gap at the destination between "CTS" and "Data" is called "CMLCTS". We define: for the worst propagation scenario. After receiving the RTS frame, the destination then uses the distance information measured from the "RTS" frame to calculate the time to reply with a "CTS" frame so that the "CTS" frame reaches the source after a "CML" space can be counted. During the gap of "CML", a potential interferer is avoided for collision-free transmission. Once the "Adjusted Response" state finishes, the source sends the data packets through the corresponding channel and goes to the "ACK" state. In summary, the second stage allows the destination to negotiate with the source, which gives both the source and the destination more flexibility and therefore reduces the chance that the destination fails due to channel collision. The third stage starts as soon as the "CTS" frame is actually received. During this stage, if the destination receives "Data" from the source, it goes to the "Response adjustment" state to verify that the data packet is coming from the source. Otherwise, a timeout occurs.
At the fourth stage, the "ACK" for the corresponding data packets is sent through the selected channel once the "Response adjustment" state finishes. After receiving the first ACK packet, the source finishes its transmission process, the BTS values are reset, and the node goes to the "Channel request" state if there are packets to transmit.
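To summarize the handshake, the following is a minimal Python sketch of the happy-path packet sequence; the class and field names mirror the package formats quoted above but are our illustrative assumptions (timeouts, CML gaps and BTS broadcasting are omitted).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Packet:
    kind: str                                  # 'RTS', 'CTS', 'DATA' or 'ACK'
    node_pos: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    node_speed: float = 0.0
    destination: int = -1

def handshake(source: int, relay: int, dest: int) -> List[Packet]:
    """Four-way handshake source -> relay (objective node 1) -> dest
    (objective node 2), stages 1-4 as described in the text."""
    return [
        Packet('RTS', destination=dest),       # stage 1: send request
        Packet('CTS', destination=source),     # stage 2: relayed reply
        Packet('DATA', destination=dest),      # stage 3: payload via relay
        Packet('ACK', destination=source),     # stage 4: acknowledgment
    ]
```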
RBF Learning Network for Localization Errors Estimation
The sound propagation loss is one of the major causes of cooperative localization errors. It is composed mainly of three parts: geometrical spreading, attenuation by absorption, and the propagation anomaly. In decibel form,

$10 \log A(l, f) = k \cdot 10 \log l + \alpha(f) \cdot l$,

where α is the absorption coefficient in dB/km, k represents the geometrical spreading factor, l represents the transmission range, and f represents the signal frequency.
If we set N_t as the turbulence noise, N_v as the vehicle noise, N_w as the wind-driven wave noise, and N_th as the thermal noise, we obtain the channel capacity as

$C = B \log_2\!\left(1 + \frac{P_{tx}}{A(l,f)\,(N_t + N_v + N_w + N_{th})\,B}\right)$,

where B is the bandwidth and P_tx is the signal transmission power. MAUVs in the formation should not only keep the formation configuration to accomplish the intended missions, but also avoid collisions with obstacles; maintaining the formation shape and the relative distances is therefore important. If we set p_c as the formation center, each AUV can acquire the geometric center by communicating with its neighbors so as to keep the formation, and the error between p_c and the desired center p_T of the formation region is then considered. An RBF neural network $\hat{W}^T\sigma(s)$ is used to estimate the three-dimensional cooperative localization errors caused by packet loss in the data transmission and by measurement noise, where $\hat{W} = [w_1, \ldots, w_{N_h}]^T$ is the weight vector and the input s includes the packet loss, delay, current relative distance between the AUVs, throughput, and current AUV speed.
The output of the RBF neural network can be expressed as

$o_k = \sum_{m=1}^{N_h} \xi_{mk}\,\sigma_m\!\left(\sum_{i=1}^{N_i} w_{im} s_i - \delta_{wm}\right) - \delta_{\xi k}, \quad k = 1, \ldots, N_o$,

where N_h, N_i and N_o represent the numbers of hidden, input and output neurons, w_im and ξ_mk denote the network weights, δ_ξ and δ_w represent the threshold offsets, and σ(·) denotes the Gaussian function

$\sigma_m(x) = \exp\!\left(-\frac{\|x - r_m\|^2}{b_m^2}\right)$,

where r_m is the center vector of the receptive field.
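A compact sketch of such a Gaussian RBF forward pass (our simplified form, applying the Gaussian directly to the input's distance from each center):

```python
import numpy as np

def rbf_forward(s, centers, widths, xi, delta):
    """Gaussian RBF network: hidden unit m fires as exp(-||s - r_m||^2 / b_m^2);
    outputs are weighted sums of the hidden activations minus thresholds.
    s: (N_i,), centers: (N_h, N_i), widths: (N_h,), xi: (N_h, N_o), delta: (N_o,)."""
    d2 = np.sum((centers - s) ** 2, axis=1)   # squared distance to each center r_m
    h = np.exp(-d2 / widths ** 2)             # Gaussian receptive fields
    return h @ xi - delta                     # outputs o_k
```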
The weights w_im can be obtained through the reinforcement learning update

$w(s(t), a_k(t)) \leftarrow w(s(t), a_k(t)) + \alpha\big[r(t+1) + \gamma\, w^*(s(t+1)) - w(s(t), a_k(t))\big]$.

In this algorithm, an action is taken on each packet transmission episode, and the actions are chosen through an ε-greedy strategy: when ε is close to 1, the actions are taken randomly, $a(t) \in U(a_{min}, a_{max})$; when ε is close to 0, the system exploits the learned knowledge by selecting the best-valued actions. The selection is made by comparing a random value $x_\varepsilon \in U[0, 1]$ with ε. The actions represent the power transmission levels. The state is a combination of the transmission energy E_trans and the channel transmission error evaluation P_error, which is computed from the bit error rate B_error and the number of bits N_bit in the packet [13]. If each transmission action attempts to transmit the total packets, the rewards are defined as a combination of the packet reception and energy power levels, where π is the quantization step-size factor between two consecutive quantization levels, pr(t) and p_Ediss(t) are the packet reception levels and energy dissipation levels, respectively, and m_pr is the number of quantized pr(t) levels. Finally, for the formation error dynamics of the next section, β denotes the maximum speed of the desired trajectory p_d, $\beta = \max(\|\dot{p}_d\|)$, and ⊗ denotes the Kronecker product.
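The ε-greedy selection and the value update quoted above can be sketched as follows; the tabular layout, the greedy next-state value used for w*, and the default α and γ are our assumptions.

```python
import numpy as np

def eps_greedy(rng, w_row, eps):
    """Pick a transmission-power action: explore with probability eps,
    otherwise exploit the current value estimates (greedy)."""
    if rng.uniform() < eps:
        return int(rng.integers(len(w_row)))   # a(t) ~ uniform over actions
    return int(np.argmax(w_row))

def td_update(w, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """w(s,a) <- w(s,a) + alpha * [r + gamma * w*(s') - w(s,a)], with
    w*(s') taken as the best value over actions in the next state."""
    w[s, a] += alpha * (r + gamma * np.max(w[s_next]) - w[s, a])
```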
Formation Shape Maintenance with Potential Field
Potential functions play a great role in helping the AUVs move along the desired gradient directions and finally stabilize at the local minima. The following defines the layered potential function shapes that drive the AUVs to reach the desired region and maintain a formation shape (see Figure 5).
Here $\delta\eta_{iol} = \eta_i - \eta_{ol}$, where $\eta_{ol}$ is a constant reference point of the l-th desired region, l = 1, 2, …, m, and m is the total number of objective functions; $f_{Sl}(\delta\eta_{iol})$ represents scalar functions with continuous partial derivatives. From Equation (1), the desired range of AUV motions in the formation is defined as a cylindrical, ring-shaped region: for each AUV $p_i$, the desired region is the ring centered around $p^d_c$ between $R_1$ and $R_2$ with height h. Therefore, the scalar attractive forces of the shape function can be defined layer by layer.
Hence, the center of the desired formation region is obtained from the layer-1 functions, and, with $k_l$ set as a positive constant, the traditional potential energy function for the desired formation regions in Figure 5 follows. In consideration of the under-actuated characteristics of the AUV, the magnitude of the potential energy functions produced from the three-dimensional distances has been reduced to improve the robustness and convergence of the scheme. On the other hand, since the rudder angle is significant for an under-actuated AUV to arrive at the desired positions, the yaw error of the AUV formation is comparatively more important. Figure 5. The layered region for AUV formation and collision avoidance.
Thus, the region error for the i-th AUV is defined as follows.
For collision avoidance, the repulsive forces between AUVs, or between AUVs and obstacles, are defined on the basis of the collision avoidance region, where $p_{oi}$ is the position vector of the i-th obstacle and $\delta\eta_{ij} = \eta_i - \eta_j$. The functions $g_{1ij}, g_{2ij}, \ldots, g_{N_L ij}$ for the first layer, second layer, …, and the innermost layer, respectively, are continuous and differentiable; N is the number of layers, and $R_{i1} > R_{i2} > \ldots > R_{iN}$ denote the radii of the first, second, and innermost layers, respectively. Similar to Equation (19), the collision avoidance energy functions have been magnitude-reduced.
Here $k_{Nij} > \cdots > k_{2ij} > k_{1ij}$ are positive constants, and the potential energy for collision avoidance between the i-th and the j-th vehicle follows from these layered functions. If $L_i$ is set as a positive definite matrix, the estimated parameter $\hat{\lambda}_i$ is updated adaptively. In order to prove the stability of the RBF-based adaptive formation scheme, a Lyapunov-like function is constructed for the multiple-AUV system, and its time derivative is evaluated using Equations (20), (31) and (32). Using Equation (25), the last term of Equation (36) can be rewritten; from Equation (22) we have $g_{hij}\delta q_{ij} = g_{hji}\delta q_{ji}$, so the last term of Equation (35) can be simplified, with $W^*_{k,i}$ denoting the ideal constant weights. The time derivative of the Lyapunov function in Equation (37) then shows, via Equation (28), that $\Delta\rho_{ij} \to 0$. As $t \to \infty$, all the error terms vanish, and summing them yields the steady-state condition. Since the interactive forces between the AUVs are bi-directional, the summation of all the interactive forces in the system is zero. One trivial solution of Equation (43) implies that some AUVs must be on opposite sides of the desired region; when there are interactions or couplings among the AUVs from different sides of the desired region, a reasonable weightage can be obtained for $\Delta\xi_i$ by adjusting $\alpha_i$. Finally, since $s_i \to 0$ and $\Delta\xi_i \to 0$, we conclude from Equation (28) that $\Delta\rho_{ij} \to 0$. Hence, all the AUVs are synchronized to the same speed and maintain constant distances among themselves at steady state.
Simulations and Experiments
In order to analyze and verify the designed communication link framework and formation scheme, simulations and experiments were carried out. In the formation simulations of Figures 6 and 7, comparisons are made between the proposed adaptive formation scheme with and without the RBF neural network. The disturbance is set as a current with a speed of 0.1 m/s in the west direction. The simulation includes formation along a round curve and cruising in a confined channel. The communications are simulated in the NS-2 simulator on the basis of the communication protocol of Section 2, and the formation control simulation platform was established on the basis of the AUV hydrodynamic equations.
In Figure 6, the three AUVs are planned to follow a round curve in a line shape, i.e., the followers are planned to maintain the same distance one after another. The protocol for linear topology has been applied for the formation communication on the basis of the network framework of Section 2. Since the radius of the trace curvature is greater than the radius of the AUVs' gyration, the three AUVs can keep formation cruising precisely. The packet loss and data transmission throughput are illustrated in Figure 6b; the cooperative localization accuracy can be improved through the reinforcement learning RBF neural network, thereby improving the formation stability. From Figure 6c, the reinforcement learning RBF neural network can compensate for and reduce the cooperative localization errors caused by communication loss through Equations (12)-(14).
Channel cooperative exploration is one of the important applications, and it is very difficult for MAUVs because of the changing channel size and curvature. Through the reinforcement learning RBF neural network, the MAUVs' formation can obtain more accurate cooperative localization information. The multibody system-based potential field can help the MAUVs maintain and change their formation shape according to the environment; the protocols for one-many contending topology and linear topology have been applied and switched according to the shape requirements.
Offshore experiments of MAUVs formation coverage exploration are illustrated in Figure 8. The vehicles were given folding lines with a 90-degree yaw path to test the formation performance of heterogeneous AUVs. The three AUVs can keep their formation while cruising under the strategies proposed in this study.
Conclusions
MAUVs' formation is of great significance for marine surveys and exploration. In order to realize MAUVs' formation, this study has focused on their communication and formation. On the basis of the multibody system concept, the MAUVs' formation and communication link framework has been established with an adaptive RBF strategy; the connection for communication and formation between AUVs can be viewed as a spring-and-damper system. The packet transmission scheme has been designed with a multi-layered network topology, which reduces the packet loss rate and improves the throughput of the network. Moreover, through the reinforcement learning RBF neural networks, the adaptive RBF formation strategy is improved with more accurate cooperative localization information. Simulations and offshore experiments with multiple heterogeneous under-actuated AUVs verify the performance of the proposed method.
"Computer Science"
] |
Euclidean correlators at imaginary spatial momentum and their relation to the thermal photon emission rate
The photon emission rate of a thermally equilibrated system is determined by the imaginary part of the in-medium retarded correlator of the electromagnetic current transverse to the spatial momentum of the photon. In a Lorentz-covariant theory, this correlator can be parametrized by a scalar function $\mathcal{G}_R(u\cdot\mathcal{K}, \mathcal{K}^2)$, where $u$ is the fluid four-velocity and $\mathcal{K}$ corresponds to the momentum of the photon. We propose to compute the analytic continuation of $\mathcal{G}_R(u\cdot\mathcal{K}, \mathcal{K}^2)$ at fixed, vanishing virtuality $\mathcal{K}^2$, to imaginary values of the first argument, $u\cdot\mathcal{K} = i\omega_n$. At these kinematics, the retarded correlator is equal to the Euclidean correlator $G_E(\omega_n, k = i\omega_n)$, whose first argument is the Matsubara frequency and the second is the spatial momentum. The Euclidean correlator, which is directly accessible in lattice QCD simulations, must be given an imaginary spatial momentum in order to realize the photon on-shell condition. Via a once-subtracted dispersion relation that we derive in a standard way at fixed $\mathcal{K}^2 = 0$, the Euclidean correlator with imaginary spatial momentum is related to the photon emission rate. The relation allows for a more direct probing of the real-photon emission rate of the quark-gluon plasma in lattice QCD than the dispersion relations which have been used so far, the latter being at fixed spatial photon momentum $k$ and thus involving all possible virtualities of the photon.
Introduction
The electromagnetic radiation emitted by a medium is one of its important characteristics. Under fairly general conditions, the spectrum of emitted photons can be considered a weakly coupled probe of the medium. Here we will be concerned with a relativistic and thermally equilibrated medium. Specifically, we will have the quark-gluon plasma in mind, the high-temperature phase of strong-interaction matter. The quark-gluon plasma at a temperature of 200 to 500 MeV is studied experimentally in heavy-ion collisions, and the spectrum of photons produced in the collisions has been measured at several center-of-mass energies [1,2]. A closely related observable is the spectrum of detected lepton pairs (e+e− or μ+μ−), which are produced via an off-shell (time-like) photon. These measurements make the electromagnetic spectral function of the medium the only one that is to date experimentally accessible. As a further important motivation, the quark-gluon plasma was present in the first microseconds of the Universe. In addition to photons, various weakly interacting particles may have been produced at that epoch by similar mechanisms (see [3] for the thermal field theory aspects). Some of them, such as keV-scale sterile neutrinos (see [4] for a review), could constitute (part of) the dark matter in the Universe.
The calculation of the photon emission rate by a strongly coupled medium such as the quark-gluon plasma is a challenging task. The asymptotic freedom property of QCD implies that weak-coupling methods [5,6] become reliable at sufficiently high temperatures. However, the convergence of the perturbative series at experimentally accessible temperatures is doubtful. In contrast, the real-time AdS/CFT correspondence [7] allows for calculations of transport coefficients and the photon emission rate at very strong coupling. These calculations cannot to date be performed in QCD; rather, they are performed in theories whose thermal properties are in many instances found to be similar to those of the quark-gluon plasma [8]. They thus provide a qualitatively different picture of the high-temperature phase of non-Abelian gauge theories, mainly characterized by the absence of quasiparticles. Other theory approaches have been followed to make realistic predictions for the photon rate in heavy-ion collisions; see e.g. the recent refs. [9][10][11][12][13][14]. Finally, lattice QCD simulations can deliver correlation functions related in a known way to the photon emission rate. Actually determining the latter from the correlation functions, however, involves a numerically ill-posed inverse problem related to the use of the imaginary-time (Matsubara) formalism in lattice QCD; see [15][16][17] for recent calculations of this type. The correlation functions used so far are at fixed spatial momentum, and their relation to the production or absorption of photons involves all possible photon virtualities. This feature is somewhat unfortunate, because the processes in the medium leading to the production of a dilepton pair with a large invariant mass are quite different from those producing a real photon. The former process occurs already for non-interacting quarks, while, to leading order in the fine-structure constant α ≈ 1/137, real-photon emission starts only at $O(\alpha_s)$, where $\alpha_s = g^2/(4\pi)$ is the strong coupling constant.
From a field theory point of view, the photon emission rate of a thermally equilibrated system is determined by the imaginary part (i.e., the spectral function) of the retarded correlator of the electromagnetic current in the medium. The retarded correlator can be parametrized by the (spatially) longitudinal and transverse Lorentz scalar functions $G^{T,L}_R(u\cdot K, K^2)$, where u is the fluid four-velocity and K corresponds to the momentum of the photon. The first argument thus corresponds to the photon energy in the rest frame of the fluid. In this article, we propose to compute the analytic continuation of $G_R(u\cdot K, K^2)$ at fixed, vanishing virtuality $K^2$, to imaginary values of the first argument, $u\cdot K = i\omega_n$. At these kinematics, the retarded correlator is then equal to the Euclidean correlator $G_E(\omega_n, k=i\omega_n)$, whose first argument is the Matsubara frequency and the second is the spatial momentum. The latter must be given an imaginary value in order to realize the condition $K^2 = 0$. Via a dispersion relation at fixed $K^2 = 0$, the Euclidean correlator with imaginary spatial momentum is related to the photon emission rate.
At large spatial separation in the plasma, the position-space correlator with a definite Matsubara frequency falls off rapidly, as a sum of exponentials with characteristic screening lengths; their inverses are called screening masses. One effect of the imaginary spatial momentum in the Euclidean correlator is then to enhance the contribution of the low-lying spectrum of screening states in each non-static Matsubara sector, highlighting the importance of understanding precisely that spectrum. In QCD, we will show that the low-lying screening states contribute at $O(\alpha_s)$ to the imaginary-momentum correlator [18].
Anticipating our main result, the spatially transverse Euclidean correlator with Matsubara frequency $\omega_n$ and imaginary spatial momentum $k = i\omega_n$, denoted $H_E(\omega_n) \equiv G_E(\omega_n, k=i\omega_n)$, is related by a once-subtracted dispersion relation to the spectral function at vanishing virtuality via eq. (1). The differential photon emission rate per unit volume of quark-gluon plasma is determined by the spectral function via eq. (2) [19], with β ≡ 1/T the inverse temperature. The ideas involved in arriving at eq. (1) are broadly related to the work [20], which considers the bremsstrahlung energy loss of high-energy partons moving in the quark-gluon plasma at weak coupling, and more particularly O(g) corrections, which are shown to be accessible directly from a computation within the Matsubara formalism. These ideas were put into practice in actual lattice calculations [21]. Euclidean correlators with an imaginary frequency (i.e., momentum in the time direction) were proposed in [22] to access for instance the forward Compton amplitude in a kinematic regime where it is purely real; see [23] for a recent lattice calculation thereof. The $\eta_c$ and $\pi^0 \to \gamma^{(*)}\gamma^{(*)}$ [24][25][26] transition form factors were successfully calculated in lattice simulations using these ideas. In a numerical treatment within the Matsubara formalism, one is however restricted to real, discrete values of the Euclidean frequency. To achieve a vanishing photon virtuality, the only option is therefore to use an imaginary momentum in a spatial direction. We introduce the relevant notation and relations and derive eq. (1) in sect. 2. We then discuss further aspects of the Euclidean correlator at imaginary spatial momentum and perform tests of the dispersion relation in sect. 3. Final remarks are collected in sect. 4.
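The explicit forms of eqs. (1) and (2) are not reproduced above; the following block is a hedged reconstruction from the stated ingredients (a subtraction at a reference Matsubara frequency $\omega_r$ and the antisymmetry of σ(ω)), with the overall electromagnetic prefactor of the rate left symbolic and primed equation numbers marking it as a reconstruction rather than a verbatim quotation:

```latex
% Hedged reconstruction of eqs. (1)-(2), not a verbatim copy.
\begin{align}
  H_E(\omega_n) - H_E(\omega_r)
    &= (\omega_r^2 - \omega_n^2)\int_0^\infty \frac{d\omega}{\pi}\,
       \frac{2\,\omega\,\sigma(\omega)}
            {(\omega^2+\omega_n^2)(\omega^2+\omega_r^2)}\,, \tag{1'}\\
  \frac{d\Gamma_\gamma}{d^3k}
    &\;\propto\; \frac{\sigma(\omega=|\vec k|)}{e^{\beta|\vec k|}-1}\,,
    \qquad \beta \equiv 1/T\,. \tag{2'}
\end{align}
```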
Derivation of eq. (1)
We begin with some definitions and consider the full set of retarded current-current correlators, where $\langle O\rangle = \frac{1}{Z}\,\mathrm{Tr}\{e^{-\beta u\cdot P} O\}$ denotes the canonical thermal average for a fluid with four-velocity u, and the square brackets denote the commutator of two field operators. The set of operators $P^\mu$, with $P^0 = H$ the Hamiltonian, is the energy-momentum vector. Our convention is that $j^\mu = \sum_f Q_f \bar\psi_f \gamma^\mu \psi_f$, where the Minkowski-space Dirac matrices satisfy $\{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu}$ with $\eta^{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$. Furthermore, the four-velocity satisfies $u^2 = 1$ and takes the form $u^\mu = \eta^{\mu 0}$ in the rest frame of the fluid. Using current conservation, it is convenient to consider the decomposition into longitudinal and transverse parts. The projector $P^{\mu\nu}_T$ projects onto the linear subspace orthogonal to K and u, and $P^{\mu\nu}(K)$ is the projector onto the space transverse to $K^\mu$. In the vacuum, $G^T_R = G^L_R$ and the polarization tensor is proportional to $P^{\mu\nu}(K)$.
We consider the linear combination of correlator components introduced in [27]. This linear combination vanishes in the vacuum as a combined consequence of Lorentz symmetry and current conservation. As a consequence, it is ultraviolet finite.
Preserving the Lorentz covariance in the decomposition of the polarization tensor exhibits an analogy with the forward, spin-averaged Compton amplitude of the nucleon. The latter is also parametrized by two invariant functions $T_{1,2}(\nu = u\cdot q, q^2)$ of ν, which corresponds to the photon energy in the frame in which the nucleon is initially at rest (u is the four-velocity of the nucleon), and of the photon virtuality $q^2$. In that context, a dispersion relation in the variable ν is written for the Compton amplitudes at fixed photon virtuality; the relevant spectral functions are the structure functions $F_{1,2}(\nu, q^2)$. By contrast, in the thermal QCD context, dispersion relations at fixed spatial photon momentum have been used, and we review those in the next subsection. We derive a fixed-virtuality dispersion relation for the case of real photons in sect. 2.2.
Rest frame of the fluid and dispersion relation at fixed spatial photon momentum
In the rest frame of the fluid, we write the polarization tensor as $G^{\mu\nu}_R(\omega, k)$. The ultraviolet-finite linear combination above is then expressed in terms of $\omega = u\cdot K$, $k^i = K^i$ and $k^2 \equiv \vec k\cdot\vec k$. Current conservation implies that for light-like kinematics, ω = k, the spatially longitudinal component of the polarization tensor vanishes, and the linear combination coincides with the spatially transverse component of the polarization tensor. We define the spectral function corresponding to the correlator (9) as its imaginary part. The dispersive representation of the real part of the retarded correlator at fixed $k^2$ takes the form of eq. (13), where P indicates that the principal-value integral should be taken. We have made use of the fact that $\rho(\omega, k)$ is an odd function of the frequency ω. Due to the fact that $\rho(\omega, k^2) \sim k^2/\omega^4$ at large ω for fixed $k^2$ [27], no subtraction terms are required to make the dispersive integral convergent. We also define the spectral function at fixed, vanishing photon virtuality, $\sigma(\omega) \equiv \rho(\omega, k=\omega)$, in terms of which the photon emission rate is obtained according to eq. (2). The dispersive representation (13) can be derived in two ways: the first is to use the spectral representation of the retarded correlator, obtained by inserting a complete set of states inside the operator product $V^\mu(x)V^\nu(0)$ (see for instance [28]); the other is to inspect the analytic properties of the retarded and the advanced correlators (the latter is obtained by replacing $\theta(x^0)$ by $-\theta(-x^0)$ in eq. (3)). In this derivation, eq. (13) is the result of expressing an analytic function via Cauchy's theorem. This type of dispersion relation, namely at fixed spatial momentum, holds both for relativistic and non-relativistic theories. However, causality has stronger consequences in a relativistic theory: the commutator of two local field operators vanishes at all space-like separations. This allows one to obtain a dispersion relation in the photon energy at fixed photon virtuality $K^2 \equiv \omega^2 - k^2$.
Dispersion relation at fixed photon virtuality
We recall the standard derivation of the dispersion relation at fixed, vanishing photon virtuality [29]. In view of the vanishing of the commutator outside the light-cone, the idea is to define a function H(ω) which is analytic everywhere, except for a discontinuity on the real axis. The discontinuity corresponds to the Fourier transform of the commutator over the entire time axis. We then represent the correlator as a contour integral in the complex plane via Cauchy's theorem; the chosen contour is displayed in fig. 1. Thus the function H(ω) just above the real axis, where it coincides with $H_R(\omega)$, can be obtained as a contour integral. We have anticipated that a subtraction is necessary to make the representation convergent: indeed, in the examples considered in sect. 3, σ(ω) grows like $\omega^\gamma$ with $1/2 \le \gamma < 1$ at large frequencies. Taking the real part, we obtain the representation of the real part of $H_R(\omega)$ in terms of a principal-value integral over the imaginary part of $H_R(\omega)$. Using the fact that σ(ω) is odd, the dispersion relation can be rewritten as eq. (20). This is the master relation that allows one to probe the photon emission rate via the real part of the retarded correlator.
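The rewritten form of the dispersion relation, eq. (20), can plausibly be reconstructed from the ingredients just listed (once-subtracted at $\omega_r$, σ odd); the following is such a sketch rather than a verbatim quotation:

```latex
% Plausible reconstruction of eq. (20): once-subtracted at omega_r,
% fixed K^2 = 0, using that sigma(omega) is odd in omega.
\begin{equation}
  \mathrm{Re}\,H_R(\omega) - \mathrm{Re}\,H_R(\omega_r)
  = (\omega^2 - \omega_r^2)\,
    \mathrm{P}\!\int_0^\infty \frac{d\omega'}{\pi}\,
    \frac{2\,\omega'\,\sigma(\omega')}
         {(\omega'^2 - \omega^2)(\omega'^2 - \omega_r^2)}\,.
\end{equation}
```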
At this point, one may wonder whether the subtraction point $\omega_r$ can be sent to zero. For this to be useful, the behavior of $H_R(\omega)$ must be known in this regime. According to hydrodynamics, the longitudinal part of the retarded correlator is dominated in the limit of small ω, k by the diffusion pole (see, e.g., [28]), whose residue involves the static susceptibility $\chi_s$ and the diffusion coefficient D. Since the combination $(\delta^{ij} - 3k^ik^j/k^2)\,G^{ij}_R(\omega,k)$ vanishes in the limit k → 0, the spatially transverse part must be given in that limit by the same hydrodynamic expression. This equation is the content of the Kubo formula for the diffusion coefficient. We note that the real part of the retarded correlator vanishes in the limit k → 0 at fixed ω. What happens generically in the light-like limit (ω = k) → 0 is not entirely clear to us. As described in sect. 3.4, in this limit $H_R(\omega)$ tends to zero in the $\mathcal{N}=4$ SYM theory at strong coupling and large $N_c$, while $H_R(\omega)$ is simply constant and non-vanishing for ω ≠ 0 in the theory of non-interacting quarks. A dedicated study of this limit in kinetic theory along the lines of [30] could be illuminating.
Euclidean correlator at imaginary spatial momentum
In the Matsubara formalism, one considers the momentum-space correlators with $\omega_n = 2\pi T n$ and $J_\mu(x) = \sum_f Q_f \bar\psi_f \gamma^E_\mu \psi_f$, the Euclidean Dirac matrices satisfying $\{\gamma^E_\mu, \gamma^E_\nu\} = 2\delta_{\mu\nu}$, i.e. the spatial gamma matrices are Hermitian as opposed to anti-Hermitian. The Euclidean correspondent of the linear combination (9) then takes a form such that [28] $G_E(\omega_n, k^2) = G_R(i\omega_n, k^2)$ for $n > 0$.
Extending the function to imaginary spatial momentum, we have $H_E(\omega_n) \equiv G_E(\omega_n, k = i\omega_n)$. This continuation is straightforward if the integral over the spatial volume still converges. Thus, translating eq. (20) above into Euclidean notation, we arrive at eq. (28), valid for $n, r \neq 0$. We expect this relation to be useful because the left-hand side can be computed by standard techniques in lattice QCD. The reference frequency $\omega_r$ would be set to $\omega_1 = 2\pi T$. The left-hand side thus probes the real-photon spectral function, which for typical thermal photon energies is of order $\alpha_s(\pi T)$ at high temperatures. Indeed, we shall review in sect. 3.3 that the left-hand side vanishes for non-interacting quarks. In QCD at temperatures below 1 GeV, however, the perturbative power-counting may not be reliable.
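As an illustration of how the subtracted relation could be evaluated in practice, the short Python sketch below computes $H_E(\omega_n) - H_E(\omega_r)$, in the reconstructed form given earlier, for a toy spectral function with the stated large-ω growth $\omega^\gamma$; the toy σ(ω) and all numerical choices are assumptions, not QCD results:

```python
# Sketch: evaluate the (reconstructed) once-subtracted dispersion
# relation for a toy spectral function. sigma(w) ~ w^0.75 mimics the
# stated large-w growth w^gamma, 1/2 <= gamma < 1; it is NOT the QCD
# spectral function.
import numpy as np
from scipy.integrate import quad

T = 1.0                      # temperature (sets the units)

def sigma(w):                # toy spectral function, odd in w
    return np.sign(w) * abs(w) ** 0.75

def delta_HE(n, r, T=T):
    """H_E(w_n) - H_E(w_r) for Matsubara indices n, r != 0."""
    wn, wr = 2 * np.pi * T * n, 2 * np.pi * T * r
    def integrand(w):
        return (2 * w * sigma(w) / np.pi
                / ((w**2 + wn**2) * (w**2 + wr**2)))
    val, _err = quad(integrand, 0.0, np.inf, limit=200)
    return (wr**2 - wn**2) * val

for n in (2, 3, 4):
    print(n, delta_HE(n, r=1))
```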
Mathematically speaking, by Carlson's theorem of complex analysis, the knowledge of $H_E(\omega_n)$ for all $n \ge n_0$, where $n_0$ is any natural number, uniquely determines the spectral function σ(ω) [28,31,32]. Numerically, this complete knowledge cannot be achieved. Nonetheless it is very interesting to compute a quantity in lattice QCD which is directly sensitive to interactions in the quark-gluon plasma and is directly related to the observable photon emission rate.
Further aspects and tests of the dispersive representation
In this section, we illuminate further aspects of the Euclidean correlator at light-like momenta. We begin by discussing the infrared contributions to the correlator in terms of non-static screening states. In the second subsection, we comment on a technical issue that arises due to the fact that the lattice regularization breaks the Lorentz symmetry. We then perform tests of the dispersion relation in the theory of non-interacting quarks and in the case of correlators obtained via AdS/CFT methods. The latter correspond to a strongly coupled plasma.
Representation of $H_E(\omega_n)$ through non-static screening states
At light-like kinematics, only the spatially transverse part of the polarization tensor contributes to $H_E(\omega_r)$. Thus, we consider the transverse "screening" correlator, which has a representation in terms of the energies $E^{(r)}_n$ and amplitudes $A^{(r)}_n$ of non-static screening states. The lowest level $E^{(r)}_0$ in the Matsubara sector $\omega_r$ determines the asymptotic spatial fall-off of the correlator. This low-lying screening spectrum has been studied in [18] both at weak coupling and in lattice QCD. In terms of these levels, the correlator at light-like kinematics $H_E(\omega_r)$ can be written as a sum over screening states, where c.t. stands for contact terms originating from the region around $x_3 = 0$. Given that both $A^{(r)}_0$ and $E^{(r)}_0 - |\omega_r|$ are of order $g^2$ (with g the QCD gauge coupling), the low-lying spectrum makes an order $g^2$ contribution to $H_E(\omega_r)$. This helps explain the connection observed in [18] between non-static screening masses and the Landau-Pomeranchuk-Migdal (LPM) resummed contributions to the photon emission rate as computed in [33]. In particular, the lattice results [18] for $E^{(1)}_0$ were found to be in line with the prediction based on an effective theory, EQCD. In the latter theory, the quark fields are treated as "heavy" degrees of freedom, due to their minimal Matsubara frequency of πT, and the non-static modes of the gauge field are integrated out. On the other hand, the amplitude $|A^{(1)}_0|^2$ obtained from the lattice simulation was found to be a factor (7.7 ± 2.9) larger than its EQCD prediction. This would indicate a much stronger infrared contribution to the photon emission rate.
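To make the origin of the screening-state representation concrete, here is a minimal single-state sketch, assuming a transverse screening correlator that falls off as $A^{(r)}_0 e^{-E^{(r)}_0 x_3}$ (normalizations schematic):

```latex
% One-state sketch: integrating the exponential fall-off against the
% imaginary-momentum factor e^{|w_r| x_3} yields a pole-like term,
\begin{equation}
  \int_0^\infty dx_3\; e^{|\omega_r| x_3}\,
     A^{(r)}_0\, e^{-E^{(r)}_0 x_3}
  \;=\; \frac{A^{(r)}_0}{\,E^{(r)}_0 - |\omega_r|\,}\,,
\end{equation}
% which is enhanced as E_0 approaches |w_r| and requires E_0 >= |w_r|
% for convergence, consistent with the positivity argument below.
```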
It is worth mentioning that $E^{(r)}_0 - |\omega_r|$ cannot be negative. An argument why it must be so goes as follows. The position-space Euclidean correlator of any two local currents A and B can be represented by a Fourier series over the Matsubara frequencies. The Wightman correlator for space-like separations, $t^2 - x^2 < 0$, being the analytic continuation of $\hat G_E(x)$ back to real time, is then given by the same Fourier coefficients; for this continuation to be well-defined in the infrared, each coefficient must fall off at least as fast as $e^{-|\omega_r||x|}$, i.e. $E^{(r)}_0 \ge |\omega_r|$. Both weak-coupling and lattice calculations indeed find that $E^{(r)}_0 - |\omega_r|$ is strictly positive [18]. Non-interacting massless quarks constitute a limiting case, where the low-lying states form a $q\bar q$ continuum in a p-wave, and a direct calculation [18] shows for instance that $G^T_E(\omega_1, x_3) \sim e^{-\omega_1|x_3|}/(x_3)^2$ at long distances. The additional suppression by the second inverse power of $x_3$ suffices to make $H_E(\omega_1)$ infrared-safe.
Lorentz symmetry and the correlator $H_E(\omega_n)$ on the lattice
An ultraviolet issue in the calculation of the light-like correlator $H_E(\omega_n)$ arises on the lattice. In the continuum, $G_E(\omega_n, k^2)$, and hence $H_E(\omega_n) = G_E(\omega_n, -\omega_n^2)$, vanish identically in the vacuum. This property however relies crucially on Lorentz symmetry. Since the lattice regulator breaks this symmetry, short-distance contributions on the lattice can spoil the continuum limit of $H_E(\omega_n)$.
One safe remedy against this problem is to explicitly subtract from the thermal lattice correlator $H_E(\omega_n)$ the corresponding vacuum lattice correlator $H_{E,\mathrm{vac}}(\omega_n)$, obtained at the same bare parameters (quark masses and gauge coupling). This is appropriate, since the vacuum correlator vanishes in a Lorentz-symmetry-preserving regularization. The subtraction has the effect that, in an operator-product expansion analysis of the short-distance contributions to $H_E(\omega_n)$ performed in lattice perturbation theory, the contribution of the unit operator cancels out and the remaining short-distance singularities of the vector correlator are integrable. In this way, one is not relying on Lorentz symmetry for the cancellation of short-distance divergences, but only on the absence of dimension-two gauge-invariant operators in the theory, which is guaranteed by the exact SU(3) gauge symmetry of lattice QCD. With the vacuum subtraction in place, and with an on-shell O(a)-improved lattice discretization of the action and the vector currents [34], we expect the left-hand side of the master relation (28) to approach its continuum limit with $O(a^2)$ corrections.
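The vacuum-subtraction-plus-continuum-limit workflow just described can be illustrated with a short Python sketch; the data arrays below are invented placeholders, not lattice results, and the fit simply implements the expected $O(a^2)$ approach to the continuum:

```python
# Sketch: vacuum subtraction and O(a^2) continuum extrapolation of
# H_E(w_n). The numbers below are placeholders, not lattice data.
import numpy as np

a2 = np.array([0.010, 0.006, 0.004, 0.002])          # a^2 values (fm^2)
H_thermal = np.array([0.412, 0.398, 0.391, 0.385])   # H_E at T > 0
H_vacuum  = np.array([0.130, 0.118, 0.112, 0.107])   # same bare params, T = 0

H_sub = H_thermal - H_vacuum   # removes Lorentz-breaking short-distance part

# Fit H_sub(a^2) = H_cont + c * a^2 and read off the continuum value.
c, H_cont = np.polyfit(a2, H_sub, 1)
print(f"continuum limit: {H_cont:.4f}, slope of a^2 term: {c:.3f}")
```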
Test of the dispersion relation for non-interacting quarks
In this subsection, we use Euclidean notation, set the quark electric charge to unity, and consider the case of $N_c$ non-interacting Dirac fermions of mass m. The polarization tensor is then given by a sum over $\Gamma_F$, the set of fermionic Matsubara frequencies, $P_0 = (2n+1)\pi T$, $n \in \mathbb{Z}$. We treat separately the "vacuum" and the "matter" contributions, $\Pi_{\mu\nu}(K) = \Pi^{\mathrm{vac}}_{\mu\nu}(K) + \Pi^{\mathrm{mat}}_{\mu\nu}(K)$. The vacuum contribution has the generic form dictated by Lorentz invariance and current conservation, the virtuality dependence of $\Pi(K^2)$ being given at one loop by a standard expression. The vacuum polarization $\Pi(K^2)$ contains a logarithmic divergence which cancels in the subtraction above. Note however that the form (32), which follows from Lorentz invariance and current conservation, implies that the spatially transverse part, e.g., $\Pi^{\mathrm{vac}}_{11}(K)$ with $K = (k_0, 0, 0, k)$, vanishes for $k_0 = \pm ik$.
For the matter part, there are two independent scalar functions to be calculated. For two sets of components, X = 00 or X = μμ, $\Pi^{\mathrm{mat}}_X$ can be written in closed form [35], eqs. (34) and (35), in terms of a logarithmic function Log. In eq. (34), the operation $\mathrm{Re}(f(k_0))$ should be interpreted as the average $(f(k_0)+f(-k_0))/2$. From these components, the linear combination of interest can be expressed. In particular, setting $k = ik_0$, we obtain a result independent of $k_0$. Thus in the free theory, both sides of eq. (28) vanish. We note that for general mass m, the expression above has an interpretation as the vacuum-subtracted thermal chiral condensate. It would be interesting to see whether the relation between $H_E(k_0)$ and the chiral condensate generalizes to the interacting theory. It is important to note that the limits $k_0 \to 0$ and $k \to 0$ do not commute: in particular, the value obtained by first taking $k \to 0$ differs from $\lim_{k_0\to 0} H_E(k_0)$.
Large-$N_c$ $\mathcal{N}=4$ SYM at infinite 't Hooft coupling
The retarded correlator of two vector currents in the large-$N_c$ $\mathcal{N}=4$ super-Yang-Mills theory at infinite 't Hooft coupling λ can be computed using the AdS/CFT real-time prescription [7]. For the purpose of testing the dispersive representation (28) in an interacting theory, we have obtained both the real and the imaginary part of the retarded correlator by solving ordinary differential equations [36,37] for the longitudinal and transverse electric field. The imaginary part of the retarded correlator for light-like momenta has an analytic solution [37] in terms of the hypergeometric function $_2F_1$. Here we find numerically that the correlator vanishes in the light-like limit, so that the dispersion relation (20) can be written in a simpler form, eq. (45), or in the Euclidean version, eq. (46). In the following, for the purpose of performing numerical checks, we divide both sides by the static susceptibility, thus rendering them dimensionless. -We checked eq. (46) by computing the left-hand side numerically from the differential equations and finding (−1.343, −2.230) for the cases n = 1 and n = 2; on the other hand, computing the right-hand side integral numerically using eq. (43), we obtain the same answer to the quoted number of digits. -Similarly, we checked eq. (45) in the same way and find, for ω = πT, −0.2543(1) for the right-hand side (using the standard ε-prescription to perform the principal-value integral), and −0.2544(1) for the left-hand side.
-We have also checked the dispersive representation at fixed spatial momentum, eq. (13). For instance, at ω = πT, the right-hand side value of −0.2544(1) is reproduced by inserting the spectral function ρ(ω′, k = πT), numerically determined for all ω′, into the integral, for which we obtain −0.252(3). In this case the precision of the test is only at the one-percent level, owing to the cumulated uncertainty of inserting a numerically determined function into a principal-value integral.
Conclusion
We have derived and tested a dispersion relation at fixed virtuality for Euclidean correlators at imaginary spatial momentum in terms of the photon emission rate of thermal QCD matter, eqs. (1), (2). The dispersive variable is the photon energy in the rest frame of the fluid. We have mainly the photon rate of the quark-gluon plasma in mind, but the relation applies equally well to the low-temperature phase of QCD. Dispersion relations formulated at fixed spatial momentum (e.g., eq. (13)) only exploit the non-relativistic version of causality, namely the fact that the retarded correlator (3) is analytic for Im(ω) > 0 for any fixed spatial momentum k; those at fixed virtuality $K^2 = 0$ make use of the stronger relativistic causality property, see eq. (15). It remains to be seen in practice how well the Euclidean correlators at imaginary spatial momentum can be controlled a) at long distances, where perhaps a larger spatial extent in the direction of the momentum will have to be used; and b) at short distances, where a subtraction of the Lorentz-symmetry-breaking effects of the lattice must be performed, as described in sect. 3.2. But if these correlators can be determined reliably, comparisons with theory predictions for σ(ω) promise to be extremely interesting. At temperatures typical of heavy-ion collisions, the predictions of models not relying on a weak-coupling expansion can be tested. At sufficiently high temperatures, the weak-coupling calculations of σ(ω), which involve sophisticated resummations [5,6,33], should correctly predict the left-hand side of eq. (1). Also, if one attempts a numerical inversion of the dispersion relation (1) for σ(ω)/ω, an important feature of the latter function is that it is expected to have a very mild ω dependence in the quark-gluon plasma (see, e.g., fig. 11 in [27]), except at weak coupling when one enters the ω ∼ $D^{-1}$ regime, where D is the diffusion coefficient; however, the Lorentzian kernel appearing in (1) suppresses this region. This feature is helpful in reconstructing σ(ω) for ω ≈ 2πT, while it means that the soft-photon emission rate remains difficult to probe using Euclidean correlators, a point that was already noticed [15,27] for the fixed-k dispersion relation. The regime of photons with frequency ω ∼ $D^{-1}$ also presents difficulties in weak-coupling calculations; see, e.g., the discussion in [37]. For all these reasons, a comparison with weak-coupling predictions is best carried out at ω ≈ 2πT.
Finally, it is likely that Euclidean correlators at imaginary momentum have further interesting applications in lattice QCD, including in zero-temperature physics. | 6,720 | 2018-11-01T00:00:00.000 | [
"Physics"
] |
A probabilistic method for identifying rare variants underlying complex traits
Background Identifying the genetic variants that contribute to disease susceptibility is important both for developing methodologies and for studying complex diseases in molecular biology. It has been demonstrated that the spectrum of minor allele frequencies (MAFs) of risk genetic variants ranges from common to rare. Although association studies are shifting to incorporate rare variants (RVs) affecting complex traits, existing approaches do not show a high degree of success, and more effort is needed. Results In this article, we focus on detecting associations between multiple rare variants and traits. Similar to RareCover, a widely used approach, we assume that variants located close to each other tend to have similar impacts on traits. Therefore, we introduce elevated regions and background regions, where the elevated regions are considered to have a higher chance of harboring causal variants. We propose a hidden Markov random field (HMRF) model to select a set of rare variants that potentially underlie the phenotype, and then a statistical test is applied. Thus, the association analysis can be achieved without pre-selection by experts. In our model, each variant has two hidden states that represent the causal/non-causal status and the region status. In addition, two Bayesian processes are used to estimate the genotype, phenotype and model parameters. We compare our approach with three current methods on different types of simulated datasets, and our approach shows higher statistical power than the other methods. The software package RareProb and the simulation datasets are available at: http://www.engr.uconn.edu/~jiw09003.
Introduction
In most existing genetic variant association studies, "common trait, common variants", which asserts that common genetic variants contribute to most traits (disease susceptibilities), serves as the central assumption. Researchers have successfully identified some significant associations between common single nucleotide polymorphisms (SNPs) and disease traits [1]. However, despite the enormous efforts expended on association studies of complex traits, common genetic variants show only a moderate influence on different phenotypes in many reported disease associations and consequently have limited diagnostic value [2,3]. While the identification of common variants thus creates a dilemma, an alternative hypothesis known as "common trait, rare variants", which asserts that multiple rare variants with moderate to high penetrances may collectively influence disease susceptibilities, has been suggested in the literature [3][4][5]. Rare variants are defined as those whose minor allele frequencies (MAF) are less than or equal to 0.01 (≤ 10⁻²). Although some rare variants associated with Mendelian diseases have been identified, more often the allelic population attributable risk (PAR), which describes the reduction in incidence that would be observed in unexposed samples compared to the actual exposure pattern, is low. The odds ratio (OR), a measure of the strength of association or non-independence between two binary data values, is also low. Moreover, based on the "common trait, rare variants" hypothesis, in many cases a set of rare variants, instead of just one variant, should be identified to fully explain the genetic influence. Both the single-variant test [6] and the multiple-variant test [7] have been applied to rare variant association studies. However, due to the reasons outlined above, neither of them shows satisfactory power in detecting associations. Although more and more attention is being focused upon rare variants, there has only been limited success thus far [8][9][10].
Alternatively, the collapsing strategy, also called the "burden-based test", is another approach for rare variant association studies. Most of the collapse-based approaches build on the "recessive-set" genetic model, in which the predisposing haplotype contains mutation(s) in at least one variant [11]. Multiple rare variants in the same locus are collapsed, based on different standards, and then statistical tests are applied. The locus here is defined as a selected region that consists of a group of candidate rare variants [9,[12][13][14]. However, it has been argued that existing collapse-based approaches implicitly assume that all rare variants influence the phenotype in the same direction and with the same magnitude [10,15]. Researchers have observed that any given rare variant could have no effect, could be causal, or could be protective for the endpoints (traits) [15]. For example, some low-frequency variants in PCSK9 in African Americans can have a substantial effect on serum Low-Density Lipoprotein Cholesterol (LDL-C), thereby increasing the risk of, or protecting against, myocardial infarction [16][17][18].
Collapse-based approaches have low statistical power when "causal", "neutral" and "protective" variants are combined [13,15,19]. To overcome this weakness, some approaches [9,14] assume that the rare variants are well selected by experts, while weighting each variant is another widely used strategy [9,11,14]. In a recent study, Bhatia and others [19] suggested the development of a "model-free" approach, RareCover, that only collapses a subset of potentially causal variants from all of the given variants. Here, the "model" refers to the genetic association model that consists of the pre-selected candidate variants.
Motivated by RareCover, in this article we focus on rare variant association analysis without any pre-selection of candidate variants. We propose a probabilistic approach, RareProb, to make selections using a Markov random field (MRF) model and to identify multiple causal rare variants that influence a dichotomous phenotype using statistical tests. Our approach considers both the causal and the protective variants, which distinguishes it from the previous study RareCover, and it is therefore a robust predictor of the direction and the magnitude of the genetic effects. Moreover, inspired by the weight-sum approaches [9,11,14], we also weight each variant; however, we not only consider the likelihood of a variant being causal but also compute the pair-wise likelihood of candidate variants being collapsed together. Note that although it is difficult to observe, relatively weak interactions (e.g., linkage disequilibrium) are expected between rare variants [4,11,13,20]. Furthermore, in regression-based association methods, genetic similarities are often used to reduce the dimensions of the regression models. Therefore, we introduce two kinds of genetic regions, the elevated region and the background region, into our model; the elevated region has a higher probability of harboring a causal variant. The assumption that causal variants are often located close to each other is widely used, e.g. via sliding windows in RareCover [19]. However, the regions are more flexible than a preset sliding window, as used in RareCover.
We adopt the "dominant" and "recessive set" genetic model, which are also used in [9][10][11]14,15,19]. In the dominant and recessive-set model, the predisposing genotype harbors the mutation(s) in at least one variant on any of the two haplotypes. Therefore, for one genotype, there are two possible allelic values at each variant: one denotes that both haplotypes carry a wide-type allele, while the other denotes that at least one haplotype carries a mutant. In our method, each variant has two hidden states, causal/non-causal status and elevated/background region status. The MRF includes the hidden states, emission probabilities and transition probabilities. The emission probabilities bridge the hidden states and the genotypes, while the transition probabilities link the two hidden states. Following the pseudo-likelihood estimation method [21], we infer the model parameters and all of the hidden states. The simulation experiments show that our approach outperforms RareCover, RWAS [14] and LRT [9] on different parametric settings. In particular, RareProb obtains better results on large-scale data.
Notions and model overview
Suppose we are given M rare variants (allelic sites) on a set of N genotypes. Let $s_i$ denote the allelic value of site s on genotype i (1 ≤ i ≤ N, 1 ≤ s ≤ M), where $s_i = 0$ means both haplotypes of i carry the wild-type allele, while $s_i = 1$ means at least one haplotype carries a mutant allele. Each genotype carries a dichotomous phenotype. Let the vector P denote the phenotypes, where $P_i = 1$ represents that i is affected by the phenotype trait (a case), while $P_i = 0$ represents that i is a control.
The core of our approach is a Markov random field (MRF) model. We first introduce four key components of this MRF:
• The observed data of this MRF consist of all of the genotypes and phenotypes.
• There are two unknown states for each site: one is the causal or non-causal status and the other is the region location status. Here, we define them as the hidden states of this Markov random field. Let a latent vector R represent the region status, where $R_s = 1$ denotes that site s is located in an elevated region, while $R_s = 0$ denotes that s is located in a background region. Additionally, let a latent vector X represent the causal/non-causal status, where $X_s = 1$ if site s is causal (contributes to the phenotype); otherwise, $X_s = 0$. Probabilistic functions are designed to represent the probabilities of each hidden state. The RareProb framework is able to incorporate prior information, obtained by different software tools, e.g. Align-GVGD [22] and SIFT [23], by updating the initial X and R vectors.
• A neighborhood system is required in the MRF model to describe the interactions among hidden states. Details of the hidden states and neighborhood system are given in the section "Estimation of the transition probabilities in HMRF".
• There are two kinds of probabilities in the MRF model: emission probabilities and transition probabilities. Emission probabilities bridge the relationships among genotypes, phenotypes and hidden states. Moreover, the hidden states X and R are not independent of each other, and the relationships between the hidden states are described by the transition probabilities. The conditional probability P(X_s = 1|R_s = 1) denotes the probability that site s is causal when it is located in an elevated region, while P(X_s = 0|R_s = 1) denotes the probability that site s is non-causal when it is located in an elevated region. Similarly, another two conditional probabilities, P(X_s = 1|R_s = 0) and P(X_s = 0|R_s = 0), give the probabilities of being causal or non-causal if the site is located in a background region. Details of the emission probabilities are given in the section "Estimation of the emission probabilities in HMRF", and the transition probabilities in the section "Estimation of the transition probabilities in HMRF".
The central thesis of our approach is that causal rare variants, which should be collapsed together, are treated as one random vector variable with a certain dimension. Then, the probability of this group of causal rare variants becomes the probability of one variable being associated with the phenotype. Based on the Markov-Gibbs equivalence [21], the probability of this random variable can be decomposed into a sum of clique potentials. The first-order clique potentials describe the probability of one variant being causal, while the second-order clique potentials measure the pair-wise genetic similarities, which shares the idea of the kernel machine in regression frameworks [10,24,25]. The neighborhood system in the MRF model consists of clique potentials. In our approach, the neighborhood system only contains the first-order and the second-order clique potentials, because there is scant evidence supporting a biological or medical scenario for higher-order potentials. For each variable, the MAFs and model parameters can be estimated by maximizing the likelihoods of the genotypes. Then, the probability of the variable and the variable itself can be updated from the MAFs and model parameters. Two or three iterations can be applied if needed for the convergence of the MRF. Thus, our approach selects a subset of candidate causal variants by updating the variables and avoids the weakness of the same-magnitude-effect assumption, because the neighborhood system is able to describe both the "causal" and the "protective" variants.
Estimation of the hidden states in HMRF
Neighborhood system
Assume there are N/2 cases and N/2 controls among all of the genotypes (if the number of cases is not equal to the number of controls, all of the results can still be used by applying noncentrality parameters). At a certain variant s, let $\theta_s$ denote the MAF for the cases, and let the number of case genotypes that carry at least one mutant allele be $c^+_s$. Let $r_s$ denote the MAF for the controls, and let the number of control genotypes that carry at least one mutant allele be $c^-_s$. Then, we can draw two binomial distributions for the cases and the controls [9,14]: $c^+_s \sim \mathrm{Bin}(N/2, \theta_s)$ and $c^-_s \sim \mathrm{Bin}(N/2, r_s)$. Thus, for a site s, a statistic $z_s$ of the difference between $\hat\theta_s$ and $\hat r_s$ can be formed, where $\hat r_s$ is the estimate of $r_s$. Similar to the linear kernel function, which calculates genetic similarities [10], we measure the likelihood between pairwise rare variants, which denotes how likely it is that two variants would be collapsed together. For two variants s and s', we define $\omega_{s,s'}$ as the likelihood of collapsing. The ω function has the following properties: (1) When both s and s' are causal variants, due to the PAR, $\omega_{s,s'}$ lies in the interval (0, 1]. (2) If one variant is "causal" but the other is "protective", the likelihood takes on a negative value. (3) The likelihood encourages the collapse of variants with similar PAR. Those rare variants whose MAFs increase rapidly in some cases, as mentioned before, could be identified by single-site tests or pair-wise tests, which are often not considered in collapsing models [8]. Let $\omega_{\cdot,\cdot}$ be the weight of two neighbors: the closer the statistics $z_s$ and $z_{s'}$ are, the larger the likelihood will be. Thus, the neighborhood system is built up.
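A small Python sketch of this construction follows. The per-site statistic is the standard two-proportion z statistic; since the exact form of the ω function is not given above, the weight below is an illustrative surrogate that merely satisfies properties (1)-(3), not RareProb's actual definition:

```python
# Sketch of the per-site statistic z_s and a pair-wise collapsing
# weight with the three stated properties. The omega form is an
# assumed surrogate, NOT the paper's definition.
import numpy as np

def z_stat(c_plus, c_minus, n_half):
    """Two-proportion z statistic between case and control MAF estimates."""
    theta_hat, r_hat = c_plus / n_half, c_minus / n_half
    pooled = (c_plus + c_minus) / (2 * n_half)
    se = np.sqrt(2 * pooled * (1 - pooled) / n_half) + 1e-12
    return (theta_hat - r_hat) / se

def omega(z_s, z_t):
    """Surrogate weight: in (0, 1] for same-direction effects, negative
    for opposite ('causal' vs 'protective') effects, and largest when
    z_s and z_t are close (similar PAR)."""
    return np.sign(z_s * z_t) * np.exp(-0.5 * (abs(z_s) - abs(z_t)) ** 2)

z1, z2 = z_stat(30, 10, 1000), z_stat(28, 11, 1000)
print(omega(z1, z2))   # close positive statistics -> weight near 1
```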
Hidden states
A rare variant s is either located in an elevated region or in a background region. Thus, we define the probability (Bayesian classifier) of s in terms of its neighbors n(s), with two MRF parameters g and h: g represents how strongly the status $X_s$ itself affects the probability of $X_s$, while h represents how strongly the neighbors of s affect the probability of $X_s$. Here, we require h > 0, which encourages the pair-wise weights and prevents them from counteracting the negative weights. Thus, the joint probability of the latent vector X takes the Gibbs form $p(X;\Phi) \propto \exp\big(g\sum_s X_s + h\sum_{s,s'} \omega_{s,s'} X_s X_{s'}\big)$ with $\Phi = (g, h)$. As the variants in different subsets (different collapsing groups) are conditionally independent, this joint probability covers all of the probabilities of the random variables (collapsing groups). Similarly, the probability of s being located in an elevated region can be represented in the same way, and the joint probability of the latent vector R can be represented by $p(R;\Phi_R) \propto \exp\big(\tau\sum_s R_s + \upsilon\sum_{s,s'} \omega_{s,s'} R_s R_{s'}\big)$, where $\Phi_R = (\tau, \upsilon)$; τ and υ are two MRF parameters. We also require υ > 0, which encourages the pair-wise weights and prevents them from counteracting the negative weights.
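The conditional (Bayesian classifier) probability of a single hidden state given its neighbors can be sketched as follows; the Gibbs energy form matches the joint probabilities written above, while the parameter values are illustrative:

```python
# Sketch of the pairwise-MRF conditional probability for one site.
# W is the symmetric matrix of weights omega_{s,s'} (0 on diagonal);
# g and h are the MRF parameters named in the text.
import numpy as np

def conditional_prob_causal(s, X, W, g, h):
    """P(X_s = 1 | X_{n(s)}) under the pairwise Gibbs prior."""
    neighbor_field = h * (W[s] @ X)   # contribution of the neighbors
    e1 = g + neighbor_field           # energy with X_s = 1
    e0 = 0.0                          # energy with X_s = 0
    return np.exp(e1) / (np.exp(e1) + np.exp(e0))

rng = np.random.default_rng(0)
M = 5
W = rng.uniform(-1, 1, (M, M)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
X = rng.integers(0, 2, M).astype(float)
print(conditional_prob_causal(2, X, W, g=-1.0, h=0.5))
```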
Estimation of the emission probabilities in HMRF
We now estimate the emission probabilities that relate X and R to the observed data. As linkage disequilibrium is rarely observed between rare variants [8], the vector of allelic values at one variant is conditionally independent of the others when a particular X is given, and the joint conditional probability of all of the genotypes therefore factorizes over sites. If $X_s = 1$, due to the PAR, $\theta_s \neq r_s$. We place a prior distribution on $\theta_s$ and a prior on $r_s$ [26], where a(·) and b(·) are hyper-parameters in the prior distributions [26][27][28]. The marginal distribution of $c^+_s$ is then obtained by integrating over the prior, and the marginal distribution of $c^-_s$ is similar; the probability of the observed genotypes at s follows from these marginals. On the other hand, if $X_s = 0$, there is no PAR between $\theta_s$ and $r_s$, which implies $\theta_s = r_s$. Here, we simply use $r_s$ to denote the MAF of s for both the cases and the controls. We have now obtained all three emission probabilities of this HMRF: $p(Y|X)$, $P(Y_s|X_s = 0)$ and $P(Y_s|X_s = 1)$.
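In code, these prior-integrated marginals take a simple beta-binomial closed form. The sketch below assumes illustrative hyper-parameter values for a(·) and b(·) and pools cases and controls under $X_s = 0$, which is one natural reading of the shared-MAF case:

```python
# Sketch: beta-binomial emission probabilities for one site s.
# Hyper-parameter values (a, b) are illustrative assumptions.
from scipy.special import betaln, gammaln

def log_betabinom(c, n, a, b):
    """log P(c | n) with success probability p ~ Beta(a, b) integrated out."""
    log_choose = gammaln(n + 1) - gammaln(c + 1) - gammaln(n - c + 1)
    return log_choose + betaln(c + a, n - c + b) - betaln(a, b)

def log_emission(c_plus, c_minus, n_half, causal, a=1.0, b=50.0):
    """log P(Y_s | X_s): separate MAFs if causal, shared MAF if not."""
    if causal:   # theta_s != r_s: independent marginals
        return (log_betabinom(c_plus, n_half, a, b)
                + log_betabinom(c_minus, n_half, a, b))
    # theta_s = r_s: one shared MAF for the pooled cases and controls
    return log_betabinom(c_plus + c_minus, 2 * n_half, a, b)

print(log_emission(30, 10, 1000, causal=True),
      log_emission(30, 10, 1000, causal=False))
```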
Estimation of the transition probabilities in HMRF
The transition probabilities link the hidden states X and R. Let $c^+_X$ be the count of causal variants in all of the elevated regions, and let $c_E$ be the number of variants in those regions. Let $c^-_X$ be the count of causal variants in all of the background regions, and $c_B$ the number of variants in those regions. Then, we draw two binomial distributions: $c^+_X \sim \mathrm{Bin}(c_E, \xi)$ and $c^-_X \sim \mathrm{Bin}(c_B, \zeta)$, where $\xi = P(X = 1|R = 1)$ and $\zeta = P(X = 1|R = 0)$. We also place prior distributions on ξ and ζ, where a(·) and b(·) are again hyper-parameters.
Thus, we have the conditional probability of X given R, the posterior distribution of ξ given $c^+_X$, and, similarly, the posterior distribution of ζ given $c^-_X$. Thus far, we have obtained all three transition probabilities of this HMRF: $p(X|R)$, $\pi(\xi|c^+_X)$ and $\pi(\zeta|c^-_X)$.
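Because the Beta prior is conjugate to the binomial, the posterior updates for ξ and ζ are one-liners; the prior hyper-parameters below are illustrative assumptions:

```python
# Sketch: conjugate Beta posteriors for xi = P(X=1|R=1) and
# zeta = P(X=1|R=0). Prior hyper-parameters are assumed values.
from scipy.stats import beta

a, b = 1.0, 9.0            # Beta(a, b) prior on xi and zeta (assumed)
c_plus_X, c_E = 12, 40     # causal count / size, elevated regions
c_minus_X, c_B = 3, 160    # causal count / size, background regions

xi_post   = beta(a + c_plus_X,  b + c_E - c_plus_X)
zeta_post = beta(a + c_minus_X, b + c_B - c_minus_X)

# Posterior means serve as point estimates in the iterative updates.
print(xi_post.mean(), zeta_post.mean())
```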
Estimation of the model parameters
Based on the Gibbs-Markov equivalence [21], a pseudo-likelihood estimation cycle can be applied to this hidden MRF to estimate the model parameters and update the hidden states. We use pseudo-likelihood estimation because $p(X;\Phi)$ and $p(R;\Phi_R)$ are difficult to compute directly. The algorithm involves the following four steps:
• Steps 1-2: Update $\hat\xi$ and $\hat\zeta$ by maximizing the transition probabilities $\pi(\xi|c^+_X)$ and $\pi(\zeta|c^-_X)$, respectively.
• Step 3: Estimate Φ and $\Phi_R$ with $\hat\Phi$ and $\hat\Phi_R$ by maximizing the pseudo-likelihood functions $L(\Phi)$ and $L_R(\Phi_R)$.
• Step 4: Update $\hat X$ and $\hat R$ via the conditional probabilities $P(X_s|Y, X_{S/s}) \propto f(Y_s|X_s; \hat\theta, \hat\rho)\, p_s(X_s|X_{n(s)}; \hat\Phi)$ and $P(R_s|X, R_{S/s})$.
There are several ways to exit from this iteration. We measure the Euclidean distance between the current and the updated $\hat X$. If the distance is less than a preset threshold, our approach stops the iteration. After the convergence of the HMRF, we obtain the estimates of X and R, as well as the MAFs for every variant. The collapsed rare variants can then be tested based on existing statistics, e.g. those in [9,10,14].
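The overall iteration can be sketched on toy data as follows. The emission and maximization steps are simplified stand-ins for the ones described above; only the four-step structure, the ICM-style state updates, and the Euclidean-distance stopping rule are the point:

```python
# Toy sketch of the pseudo-likelihood / ICM cycle (steps 1-4).
# Emission and maximization steps are simplified stand-ins; the
# region chain R is held fixed for brevity.
import numpy as np

rng = np.random.default_rng(1)
M = 50
z = rng.normal(0, 1, M); z[:10] += 4.0           # toy per-site statistics
W = np.exp(-0.5 * np.subtract.outer(z, z) ** 2)  # toy neighbor weights
np.fill_diagonal(W, 0)

X = (z > 2.0).astype(float)   # initial causal indicators
R = X.copy()                  # initial region indicators (held fixed)
for it in range(100):
    # Steps 1-2: point estimates of xi = P(X=1|R=1), zeta = P(X=1|R=0)
    xi = X[R == 1].mean() if (R == 1).any() else 0.5
    zeta = X[R == 0].mean() if (R == 0).any() else 0.5
    g, h = -1.0, 0.5          # step 3: MRF parameters (fixed here)
    # Step 4: ICM update of each causal state given field + 'emission' z
    field = g + h * (W @ X) + np.where(R == 1, np.log(xi + 1e-9),
                                       np.log(zeta + 1e-9))
    X_new = (field + z > 0).astype(float)
    if np.linalg.norm(X_new - X) < 1e-6:   # Euclidean stopping rule
        X = X_new
        break
    X = X_new
print(it, int(X.sum()))
```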
Experiments and results
In this section, we apply our approach to a real dataset from [30] and also compare it with three other approaches using different types of simulated datasets. The three comparison approaches are RareCover, which is based on [19], RWAS from [14] and LRT from [9]. Additionally, since RareCover does not appear to be released online, as in many previous works we re-implemented this algorithm and the related statistics ourselves.
Simulation frameworks
As the simulation settings in different papers [9,14,19] are quite different, we adopt all of them and generate three types of simulated datasets. In the first type, each dataset has a fixed number of causal variants, while in the second type, the number of causal variants is determined by the allelic population attributable risk (PAR). The last simulation method first generates elevated regions and background regions and then plants causal variants in each region. We describe the three simulation methods in the following sections.
Fixed number of causal variants
First, we generate datasets with fixed numbers of causal variants, following the previous approaches [14] and [9]. Each variant is generated independently, because rare variants are not believed to show significant linkage disequilibrium [9,14]. For each variant, the probability distribution of the MAF of site s in controls, $r_s$, satisfies Wright's distribution under purifying selection [4], with a selection coefficient of 12.0, $b_S = 0.001$ (the probability that the normal allele mutates to the causal variant) and $b_N = 0.00033$ (the probability that a causal variant reverts to a normal variant), the same settings used by [9,11,14].
Then, the relative risk of s is $RR = \frac{\delta}{(1-\delta)\,r_s} + 1$, where δ is the marginal PAR. The marginal PAR is equal to the group PAR (Δ) divided by the number of causal variants, while the relative risk of the non-causal variants is 1 [14]. Afterwards, the MAF of s for the cases is calculated according to $\theta_s = \frac{RR \times r_s}{(RR-1)\,r_s + 1}$. In each dataset, we simulate N = 2000 genotypes with half cases and half controls. The mutations in the cases and the controls are sampled independently according to $\theta_s$ and $r_s$, respectively.
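A Python sketch of this first simulation framework follows; sampling from Wright's distribution is nontrivial, so a Beta draw stands in for the control MAFs, which is an assumption:

```python
# Sketch of the fixed-number-of-causal-variants simulation.
# rng.beta(...) is a stand-in for Wright's distribution (assumption).
import numpy as np

rng = np.random.default_rng(2)
N, M, n_causal, group_par = 2000, 500, 50, 0.05
delta = group_par / n_causal              # marginal PAR per causal site

r = rng.beta(0.5, 200, size=M)            # control MAFs (stand-in draw)
theta = r.copy()
causal = rng.choice(M, n_causal, replace=False)
RR = delta / ((1 - delta) * r[causal]) + 1.0          # relative risk
theta[causal] = RR * r[causal] / ((RR - 1) * r[causal] + 1.0)

cases    = rng.random((N // 2, M)) < theta   # 1 = carries a mutant allele
controls = rng.random((N // 2, M)) < r
print(cases.mean(), controls.mean())
```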
Causal variants depend on PAR
The second method generates a set, C, that contains all of the causal variants. Instead of being fixed, the total number of causal variants depends on the PAR [19] and is limited by Δ (the group PAR), which is expressed in terms of Pr, the penetrance of the group of causal variants, and $P_D$, the disease prevalence in the population. Different settings are applied in the experiments.
We use the algorithm proposed in [19] to obtain the MAF of each causal variant. The algorithm samples the MAF of a causal variant s, $\theta_s$, from Wright's distribution with a selection coefficient of 30.0, $b_S = 0.2$ and $b_N = 0.002$ [4,19], and then appends s to C. Next, the algorithm checks whether the accumulated group PAR of C is still below Δ. If this inequality does not hold, the algorithm terminates and outputs C; thus, we obtain all of the causal variants and their MAFs. If the inequality holds, the algorithm continues to sample the MAF of the next causal variant. The mutations on genotypes are sampled according to $\theta_s$. For the non-causal variants, we use Fu's model [29] of allelic distributions on a coalescent, the same as used in [19], and we adopt $r_s = 5.0/N$. The mutations on genotypes are sampled according to $r_s$. The phenotype of each individual (genotype) is computed from the penetrance of the subset, Pr. Thereafter, we sample 1000 cases and 1000 controls.
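The PAR-limited generation loop can be sketched as follows; the exact per-locus PAR expression in terms of Pr and $P_D$ is not given above, so `locus_par` is a hypothetical placeholder:

```python
# Sketch of the PAR-limited causal-set generation. Both
# sample_wright_maf and locus_par are hypothetical placeholders:
# neither Wright's distribution nor the paper's PAR formula is
# reproduced exactly here.
import numpy as np

rng = np.random.default_rng(3)
group_par, Pr, P_D = 0.05, 0.1, 0.01

def sample_wright_maf():
    # Stand-in draw for Wright's distribution (s=30, b_S=0.2, b_N=0.002).
    return float(rng.beta(0.5, 300))

def locus_par(maf, Pr, P_D):
    # Hypothetical per-locus PAR contribution; NOT the paper's formula.
    return maf * (Pr - P_D) / P_D

C, total = [], 0.0
while True:
    theta = sample_wright_maf()
    C.append(theta)
    total += locus_par(theta, Pr, P_D)
    if total >= group_par:    # inequality fails -> stop and output C
        break
print(len(C), total)
```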
Causal variants depend on regions
There are many ways to generate a dataset with regions. The simplest way is to preset the elevated regions and the background regions and to plant causal variants with certain probabilities. An alternative way creates the regions by a Markov chain. For each site, there are two groups of states. The E state denotes that the variant is located in an elevated region, while the B state denotes that the variant is located in a background region. Both states E and B can transfer to a causal state C or a non-causal state C̄. If the Markov chain travels to the C̄ state, it plants a mutant on the genotype with probability r. If the variant is considered to be causal, it may further transfer to the state A, which means that the genotype carries a mutant that may affect the phenotype with penetrance Pr. Otherwise it arrives in the state Ā, and the Markov chain plants a mutant or a wild-type allele on the genotype afterwards.
To generate enough genotypes, we perform the following steps for each variant: if the process drops into C̄, we take 50,000 iterations to yield a mutant, where r is sampled from Wright's distribution with a selection coefficient of 30.0, $b_S = 0.2$ and $b_N = 0.002$. If it drops into A or Ā, we iterate back to C until 50,000 iterations are reached. The transition probability from C to A is equal to r × Pr. After we have enough genotypes, we sample 1000 cases and 1000 controls from them.
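A compact Python sketch of the region-generating Markov chain follows, using the stay/switch probabilities of 0.8/0.2 and the per-region causal probabilities (0.1 elevated, 0.001 background) quoted in the experiments section; the chain length is an arbitrary choice:

```python
# Sketch of the region-generating Markov chain: states E (elevated)
# and B (background), stay probability 0.8, switch probability 0.2.
import numpy as np

rng = np.random.default_rng(4)
M = 2000
stay, p_causal = 0.8, {"E": 0.1, "B": 0.001}

regions, causal = [], []
state = "E"
for _ in range(M):
    regions.append(state)
    causal.append(rng.random() < p_causal[state])  # plant causal status
    if rng.random() > stay:                        # switch with prob 0.2
        state = "B" if state == "E" else "E"
print(sum(causal), regions[:10])
```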
Comparisons on power
Similar to the measurements in [9,14], the power of an approach is measured by the number of significant datasets, among many datasets, using a significance threshold of 2.5 × 10⁻⁶ based on the Bonferroni correction assuming 20000 genes genome-wide. We test at most 1000 datasets for each comparison experiment.
Power versus different proportions of causal variants
We compare the powers under different total numbers of variants. In the first group of experiments, we include 50 causal variants and vary the total number of variants from 100 to 5000; thus, the proportion of causal variants decreases from 50% to 1%. In the second group of experiments, we hold the group PAR at 5% and vary the total number of variants as before. The results are compared in Table 1. From the results, our approach is clearly more powerful and more robust in dealing with large-scale data. We also test our approach under different settings of the group PAR. Those results can be found in Table S1 in Additional file 1.
The Type I error rate is another important measurement for evaluating an approach. To compute the Type I error rate, we apply the same technique as [19]. The Type I error rate is defined as the probability of a non-causal variant being selected into the potential causal set. We compare our approach only with RareCover, because RWAS and LRT do not select any potential causal variants. The results for different configurations can be found in the supplementary documents. Based on the results, our approach always maintains reasonable Type I error rates. Although on some configurations RareProb has slightly higher Type I error rates than RareCover, e.g. 1%-10% higher when the group PAR is 5%, the absolute values are still satisfactory. Moreover, when the group PAR decreases, RareProb always yields lower Type I error rates than RareCover. These results can be found in Table S2 in Additional file 2. Considering both statistical power and Type I error rate, the advantage of RareProb cannot be neglected: it is able to identify most of the causal variants with an acceptable Type I error rate. In other words, if an approach rarely identifies correct variants, a low Type I error rate becomes meaningless.
Power versus different configurations of regions
We compare the powers on different configurations of elevated regions and background regions and test the performance of our approach in identifying the regions. At each total variant number, we preset the number of regions between 2 and 8, with half elevated regions and half background regions. In these datasets, the probability of a rare variant being causal is 0.1 if the variant is located in an elevated region, and 0.001 if the variant is located in a background region. In the last group of experiments, the regions are generated by the Markov chain, where the transition probability of remaining in the same kind of region (staying in an elevated region or a background region) is 0.8, while the transition probability of switching between different kinds of regions (jumping from an elevated region to a background region, or vice versa) is 0.2. The emission probabilities are the same as before.
We test the powers and record the percentages of correct identifications of the regions. The results are listed in Table 2, where the column "Causal" represents the total number of causal variants, "Region" denotes the total number of elevated regions, "Length" indicates the total number of variants located in elevated regions, and "Correct R" shows the percentage of correctly identified regions. The results show that our approach successfully estimates the regions, while RareCover has difficulty identifying either candidate causal variants or region information. We also test our approach with the total number of variants set to 3000, 4000 and 5000. These results can be found in Table S3 in Additional file 3.
RareProb on real mutation screening data
Finally, we apply our approach to a real mutation screening dataset, previously published by [30]. The authors screened a susceptibility gene, ATM, which is thought to be associated with ataxia telangiectasia. ATM is also an intermediate-risk susceptibility gene for breast cancer [9,14]. The dataset we use (ATM_CCMSdata_Dec2011_v1) consists of 121 rare variants in a set of 2506 cases and 2235 controls, which is called a "bona fide case-control study" [9,14]. We apply RareProb to this dataset without any prior information. RareProb identifies variant c.4424A>G as a causal variant and reports a significant association with a p-value of 8.8817 × 10⁻¹⁶. As a comparison, the authors of [30] report that they identified a significant association with the help of prior information, but that they did not find a significant association based only on the results of CMC. Sul and others [14] applied RWAS and reported a non-significant association with a p-value of 0.3946 without prior information, and p-values of 0.0078 and 0.0881 when prior information on variants was obtained by Align-GVGD [22] and SIFT [23], respectively. Sul and others [9] also applied LRT and reported a non-significant association with a p-value of 0.3934 without prior information, but associations with p-values of 0.0058 and 0.08384 when introducing Align-GVGD scores and SIFT scores, respectively. Our approach successfully identifies an association and clearly points out the candidate causal variant without prior information, while neither RWAS nor LRT can achieve this.
Conclusion
In this article, we propose a probabilistic method, RareProb, to identify multiple rare variants that contribute to dichotomous disease susceptibility. Our approach is inspired by RareCover. Both approaches select a subset of
potentially causal variants from the given variants, which means our approach does not rely on the pre-selection of candidate rare variants. Furthermore, as opposed to simply merging the variants as in RareCover, our approach gains power by considering the directions and the magnitudes of the genetic effects: both the causal and the protective variants can be described by the pair-wise measurements. This avoids the loss of statistical power that occurs when "causal", "neutral" and "protective" variants are combined. Note that the pair-wise weight is not linkage disequilibrium (LD); LD is quite difficult to observe, although it is expected among rare variants. The pair-wise measurements indicate the likelihood of two variants being collapsed, which is similar to the kernel functions in regression-based frameworks. This weight is then used to build up the neighborhood system of the hidden Markov random field model. The Markov random field model treats all of the variants as one vector and estimates their causal/non-causal status by globally maximizing the likelihood of the genotypes instead of by local optimization. Our approach gains more power than existing group-wise collapsing approaches; RareProb filters out those variants with non-causal status. At the same time, unlike previous selection-based approaches, RareProb controls the false positive rate by partitioning elevated regions and background regions instead of by presetting any sliding windows; regions are much more flexible than preset sliding windows. While existing approaches can only handle hundreds of variants, there is no doubt that the total number of variants will increase rapidly with the development of new technologies, e.g. applications of next generation sequencing. The simulation experiments show that our approach obtains significantly more power, especially when the total number of given rare variants is large. We also apply our approach to a real mutation screening dataset, where a significant association is found. Our approach is able to handle thousands of variants. Moreover, our approach is easy to extend to an "additive" genetic model and to multiple phenotypes by updating the Dirichlet prior distribution.
Scoulerine affects microtubule structure, inhibits proliferation, arrests cell cycle and thus culminates in the apoptotic death of cancer cells
Scoulerine is an isoquinoline alkaloid that has shown promising suppression of cancer cell growth; however, its mode of action (MOA) has remained unclear. Its cytotoxic and antiproliferative properties were determined in this study. Scoulerine reduces the mitochondrial dehydrogenase activity of the evaluated leukemic cells, with IC50 values ranging from 2.7 to 6.5 µM. The xCELLigence system revealed that scoulerine exerts potent antiproliferative activity in lung, ovarian and breast carcinoma cell lines. Jurkat and MOLT-4 leukemic cells treated with scoulerine showed decreased proliferation and viability. Scoulerine inhibits proliferation by inducing G2- or M-phase cell cycle arrest, which correlates well with the observed breakdown of the microtubule network and increased Chk1 Ser345, Chk2 Thr68 and mitotic H3 Ser10 phosphorylation. Scoulerine activated apoptosis, as determined by p53 upregulation, increased caspase activity, and Annexin V and TUNEL labeling. These results highlight the potent antiproliferative and proapoptotic action of scoulerine in cancer cells, caused by its ability to interfere with the microtubule elements of the cytoskeleton, checkpoint kinase signaling and the p53 protein. This is the first study of the mechanism of scoulerine at the cellular and molecular level. Scoulerine is a potent antimitotic compound that merits further investigation as an anticancer drug.
Scoulerine is made from reticuline and serves as the precursor in the biosynthetic pathway for berberine, stylopine, protopine and sanguinarine 6,7 . An initial biological study described its significant in vitro antiplasmodial activity against the P. falciparum strains TM4/8.2 (a wild-type chloroquine- and antifolate-sensitive strain) and K1CB1 (a multidrug-resistant strain), with IC50 values of 1.78 µg/mL and 1.04 µg/mL, respectively. Regrettably, this activity does not meet the criteria stipulated by the Medicines for Malaria Venture 3 . Other research efforts, performed on rats, determined that scoulerine protects α-adrenoreceptors against irreversible blockade by phenoxybenzamine, inhibits [3H]-inositol monophosphate formation caused by noradrenaline 8 and acts as a selective α1D-adrenoreceptor antagonist without affecting the contraction of the rat aorta 9 . Scoulerine has also been reported to exhibit other useful pharmacological properties, such as antiemetic, antitussive and antibacterial action 3 , and has been found to have affinity for GABA receptors 2 . Interestingly, a pioneering cell culture study on this alkaloid described significant cytotoxic activity of scoulerine against the A549 and HT-29 cancer cell lines. The authors imply that the cytotoxic potency of scoulerine is associated with its ability to stabilize the covalent topoisomerase I-DNA complex and thereby promote the formation of single-strand DNA breaks 10 . It should be pointed out that the unique position of scoulerine in backbone arrangements during biosynthesis and its interesting biological activities have already attracted our attention in two previous studies. Scoulerine was found to be active as an inhibitor of β-site amyloid precursor protein cleaving enzyme 1 (BACE1), a very promising target for the treatment of Alzheimer's disease (AD) 5 . In our follow-up work, among forty-six isoquinoline alkaloids screened by MTT assay, scoulerine exhibited impressive cytostatic activity against gastrointestinal cancer cells 11 . Although our recent study demonstrated the bioactivity of scoulerine with an emphasis on the cytostatic action that may be of interest in cancer chemotherapy, further studies remain to be undertaken to better explore its anticancer potential. The present study provides a deeper investigation of the MOA of scoulerine at the cellular and molecular level. In addition, the pro-apoptotic and cell cycle arrest activity of scoulerine in p53-deficient (Jurkat) and p53 wild-type (MOLT-4) leukemic cells is determined. Finally, aiming at a further conceptual extension towards structure-cytotoxicity relationships, we introduce three aliphatic derivatives of scoulerine (2, 3 and 4) incorporating esters of carboxylic acids.
Scoulerine decreases proliferation of human cancer cells. Various leukemic cell lines and carcinoma cell lines obtained from solid tumors were used to evaluate their sensitivity towards scoulerine. First, scoulerine (1) and its three semi-synthetic derivatives (2), (3) and (4) were evaluated at a single-dose exposure (10 µM) against a mini-panel of leukemic cell lines (MOLT-4, Jurkat, Raji, HL-60, U-937 and HEL 92.1.7) using the XTT metabolic activity assay ( Fig. 2A) and against A2780 ovarian carcinoma cells using the xCELLigence system (Fig. 2B). To determine the IC50 values, a broad concentration range of scoulerine was tested ( Supplementary Fig. 1). Considering the effect of aliphatic esters of scoulerine with varying carbon chain lengths, the parent scoulerine proved to be the most active compound against the tested cell lines, with IC50 values ranging from 2.7 µM to 6.5 µM in human leukemic cells (Table 1). Further testing of the effects of scoulerine on viability and proliferation revealed that, at concentrations of 2.5, 5, 10, 15 and 20 µM, scoulerine significantly reduced the viability and proliferation of Jurkat and MOLT-4 cells in a dose-dependent manner within 24 h of treatment. Moreover, the reduction of cell viability was even more pronounced 48 h after scoulerine application ( Supplementary Fig. 2). Similar results were obtained by dynamic real-time monitoring of proliferation with the xCELLigence system, which is designed for adherent cell lines. In these experiments, the xCELLigence system was used to follow the proliferation of scoulerine- and sham-treated cells over a 72 h period. As shown in Fig. 3, scoulerine (10, 20 and 50 µM) had a clear negative impact on the proliferation of human lung carcinoma (A549), ovarian carcinoma (A2780) and breast adenocarcinoma (SK-BR-3 and MCF-7) cells. Cells treated with lower doses (1 and 5 µM) of scoulerine showed unaffected or slightly delayed proliferation.
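As an illustration of how IC50 values such as those in Table 1 are commonly derived from viability measurements, the following minimal sketch fits a four-parameter logistic dose-response curve; the concentration and viability arrays are illustrative placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (sigmoidal) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative viability data (% of control) over a concentration range (uM);
# these numbers are placeholders, not measurements from this study.
conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0, 50.0])
viability = np.array([98.0, 95.0, 72.0, 45.0, 22.0, 12.0, 8.0])

popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[5.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 = {popt[2]:.2f} uM")
```

The IC50 is read off as the fitted midpoint of the curve; in practice, software such as GraphPad Prism performs an equivalent fit.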
Scoulerine induces apoptosis in MOLT-4 and Jurkat cells.
Regarding the pro-apoptotic activity of scoulerine in the Jurkat and MOLT-4 leukemic cell lines, the compound showed an apoptosis-inducing effect, assayed by Annexin V and PI staining 24 h after treatment. Soon after the initiation of apoptosis, membrane phosphatidylserine is translocated from the inner leaflet of the plasma membrane to the outer leaflet. Quantification of Annexin V binding to externalized phosphatidylserine allowed us to clearly identify apoptotic cells within the cell populations. Twenty-four hours after the application of 0, 2.5, 5, 10, 15 and 20 µM of scoulerine, the early apoptotic rates were 4%, 8%, 22%, 24%, 22% and 23% for Jurkat cells, and 3%, 4%, 9%, 16%, 14% and 13% for MOLT-4 cells, respectively. The late apoptotic rates (cells that have lost their membrane integrity also show PI staining) were 5%, 15%, 21%, 21%, 19% and 22% for Jurkat leukemic cells, and 5%, 14%, 20%, 27%, 26% and 28% for MOLT-4 cells, respectively (Fig. 4). In follow-up experiments, DNA fragmentation, a hallmark of apoptosis due to the activation of endonucleases, was quantified by TUNEL assay.

[Fig. 2 caption: Data are shown as mean values ± SD of at least three independent experiments; *significantly different from control (P ≤ 0.05) (A). Dynamic real-time xCELLigence screen of proliferation and cytotoxicity over 62 h; human A2780 ovarian carcinoma cells in the logarithmic growth phase were treated; negative control cells were exposed to 0.1% DMSO (vehicle) and 5% DMSO was used as a positive control; the plot is representative of at least three experiments performed (B).]

Scoulerine activates caspase-3/7, -8 and -9 in a dose-dependent manner. To confirm apoptosis activation at the molecular level, we evaluated the activation of caspases-3/7, -8 and -9. Irrespective of which pathway of apoptosis is induced, caspases-3/7 are activated downstream of the initiator caspases. Caspase-9 is activated upstream of caspases-3/7 and downstream of mitochondrial membrane permeabilization, indicating activation of the intrinsic pathway of apoptosis, while caspase-8 is induced after death-receptor stimulation, which is typical of extrinsically mediated apoptosis. Here, exposure of Jurkat cells for 24 h to 2.5 and 5 µM of scoulerine resulted in considerable (P ≤ 0.05) activation of caspases-3/7 and -8, and somewhat less pronounced (P ≤ 0.05) activation of caspase-9. At the 48 h interval, the activity of caspases-3/7, -8 and -9 was still statistically significant but had begun to decline. In contrast, in MOLT-4 leukemic cells the activity of caspases-3/7, -8 and -9 was even more pronounced 48 h after the application of 2.5 and 5 µM of scoulerine than after 24 h, with an overall higher level of caspase activity (Fig. 5).
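The early/late apoptotic percentages quoted above come from quadrant gating of the Annexin V/PI scatter. A minimal sketch of such gating is given below; the gate thresholds are explicit inputs, since the instrument- and experiment-specific values are not given in the text.

```python
import numpy as np

def apoptosis_quadrants(annexin, pi, annexin_gate, pi_gate):
    """Classify flow-cytometry events into the four standard
    Annexin V / PI quadrants and return their percentages.
    Gate values are set from unstained controls in practice;
    here they are placeholders passed in by the caller."""
    annexin = np.asarray(annexin)
    pi = np.asarray(pi)
    a_pos = annexin > annexin_gate
    p_pos = pi > pi_gate
    n = float(len(annexin))
    return {
        "live":            100 * np.sum(~a_pos & ~p_pos) / n,  # AnnexinV- / PI-
        "early_apoptotic": 100 * np.sum(a_pos & ~p_pos) / n,   # AnnexinV+ / PI-
        "late_apoptotic":  100 * np.sum(a_pos & p_pos) / n,    # AnnexinV+ / PI+
        "necrotic":        100 * np.sum(~a_pos & p_pos) / n,   # AnnexinV- / PI+
    }
```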
Scoulerine acts to inhibit proliferation by inducing G2 or M cell cycle arrest. To further elucidate the mechanisms by which scoulerine exerts its growth-inhibitory activity, we investigated its effect on cell cycle progression in Jurkat and MOLT-4 leukemic cells 16 h after treatment. In the presence of 5 µM of scoulerine, the percentage of Jurkat cells in the G1 phase and the S phase decreased from 45% and 31% (mock-treated control) to 29% and 22%, respectively. Conversely, the proportion of cells in the G2/M phase increased from 24% (mock-treated control) to 49%. The rise in the percentage of Jurkat cells in the G2/M phase was even more evident in the presence of 10, 15 and 20 µM of scoulerine. A similar trend was observed in MOLT-4 cells treated with the same amounts of scoulerine (Fig. 6). Overall, these results indicate scoulerine-induced cell cycle arrest at the G2/M transition, which was confirmed in later experiments by quantifying histone H3 phosphorylated at Ser10 (H3 pSer10). Flow-cytometric analysis of the percentage of cells halted in mitosis (M phase), measured as positive for H3 pSer10 staining, revealed that Jurkat (2.5-20 µM) and MOLT-4 (5-20 µM) cells treated with scoulerine for 24 h were significantly (P ≤ 0.05) arrested in mitosis. Together, these results indicate that scoulerine is able to induce cell cycle arrest in the G2 or M phase depending on the dose and the duration of the treatment (Fig. 7).
Scoulerine disrupts the microtubule structure of A549 lung carcinoma cells. Since we observed significant G2 or M phase arrest and H3 pSer10 staining during flow cytometry cell cycle analysis, we explored whether scoulerine affects the microtubule structures of cells. To visualize this, lung carcinoma A549 cells were chosen as a model for indirect immunofluorescence with a monoclonal anti-β-tubulin antibody, owing to their lower nuclear-cytoplasmic ratio compared with leukemic cells. Additionally, A549 cells were sensitive to scoulerine and displayed strongly diminished proliferation during xCELLigence analysis at concentrations exceeding 10 µM. Epi-fluorescence microscopy of fixed A549 cells labeled with a monoclonal anti-β-tubulin antibody and a DAPI counterstain revealed that intact microtubules extended continuously through the cytoplasm in control cells. In contrast, scoulerine applied for 24 h at 10 and 20 µM visibly disrupted the microtubule structure, with a dense aggregation of microtubules around the cell nuclei and less density at the cell periphery. A549 cells treated with scoulerine at 5 µM resisted the treatment, showing an almost intact, organized tubulin network, as seen in the 0.1% DMSO-exposed control cells. Nocodazole, a microtubule depolymerizer, was used as a positive control for the study. As is apparent from the microscopic images presented in Fig. 8, treatment with nocodazole at 5 µM resulted in the expected disorganized and disassembled microtubule structures compared with 0.1% DMSO control cells.
Scoulerine does not induce DNA strand breaks.
To assess DNA damage, the alkaline comet assay was performed. Treatment of Jurkat and MOLT-4 cells with scoulerine at 2.5 and 5 µM for 24 h triggered apoptotic (secondary) DNA fragmentation, estimated by the formation of comet-like tails after single-cell gel electrophoresis. At the 24 h interval, there was a significant increase in the tail moment after scoulerine application in both Jurkat and MOLT-4 leukemic cells. However, when we performed the comet assay at an earlier 12 h interval of treatment, no considerable direct DNA damage was found. This suggests that the DNA damage is predominantly a consequence of internucleosomal cleavage associated with apoptotic cell death in response to scoulerine treatment (Fig. 9).
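For reference, a minimal sketch of how a tail moment can be computed from a single-cell comet intensity profile follows. The "fraction of DNA in the tail times tail length" definition used here is one common convention and may differ from the exact metric computed by the authors' scoring software.

```python
import numpy as np

def tail_moment(profile, head_end):
    """Simple comet-assay tail moment from a 1D intensity profile
    taken along the electrophoresis direction.

    profile:  background-subtracted fluorescence intensity per pixel,
              head first, tail last.
    head_end: index separating head from tail (normally set by the
              scoring software; here it is an explicit input).

    Uses the common definition:
    tail moment = (fraction of DNA in the tail) x (tail length).
    """
    profile = np.asarray(profile, dtype=float)
    total = profile.sum()
    tail = profile[head_end:]
    tail_dna_fraction = tail.sum() / total
    tail_length = len(tail)  # in pixels; calibrate to micrometers if needed
    return tail_dna_fraction * tail_length
```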
Scoulerine alters the levels of cell cycle checkpoint and apoptosis-related proteins. To study the molecular mechanism by which scoulerine induces cell cycle arrest and apoptosis, we investigated the expression levels and activation of several cell cycle regulatory and proapoptotic proteins. To this end, Jurkat and MOLT-4 cells were treated with scoulerine at 2.5 and 5 µM for 24 h, and whole-cell lysates were prepared and used for Western blotting. Western blot analysis showed an upregulation of the p53 protein in p53 wild-type MOLT-4 cells in the presence of 5 µM scoulerine. In Jurkat cells, checkpoint kinase 1 (Chk1) was activated through phosphorylation at Ser345 and checkpoint kinase 2 (Chk2) through phosphorylation at Thr68 after scoulerine treatment. However, in both cell lines the expression levels of p21, which is also involved in the regulation of cell cycle progression, remained unchanged. Similarly, the level of p53 phosphorylated at Ser15, which increased after cisplatin exposure (positive control), remained unchanged after scoulerine exposure (Fig. 10).
Discussion
In our previous study we reported that scoulerine has potent antiproliferative activity against Caco-2 and Hep-G2 cancer cells 11 . Another recent work in this direction showed that scoulerine isolated from the stems of Xylopia laevigata was cytotoxic toward the tumor cell lines B16-F10, HepG2, K562 and HL-60 12 . These encouraging results prompted us to investigate whether scoulerine can eliminate cancer cells via apoptosis and whether its antiproliferative effect blocks cell cycle progression. Thus, in the work herein, we investigated proliferation, cell cycle distribution, cell death, apoptosis induction, DNA damage, microtubule structure and the upregulation of selected DNA-damage response proteins following scoulerine treatment. We show that scoulerine had dose-dependent cytostatic activity in all of the leukemic and tumor lines investigated. Notably, our results contrast with those reported by Khamis and colleagues, who determined only moderate cytotoxic activity of discretamine (scoulerine), with IC50 over 3000 µM, using four human breast cancer cell lines (MCF-7, MCF-7ADR, MDA-MB435 and MT-1) and the MTT assay 13 . Here, however, scoulerine inhibited the proliferation of MCF-7 cells at 10 µM, as measured in real time by the xCELLigence cell-growth inhibition profile. To better understand the antiproliferative potential of this naturally occurring alkaloid, derivatives of scoulerine were synthesized and assayed at 10 µM for their activity on cell growth. Comparison of the semisynthetic derivatives (2), (3) and (4) with scoulerine indicated that esterification generally reduced the antiproliferative activity. Moreover, it appears that the potencies of the (2), (3) and (4) esters decreased with increasing length of the carbon chain. Next, we observed that scoulerine treatment significantly reduced the viability of Jurkat and MOLT-4 cells within 48 h at 5 μM and higher, as assessed by the Trypan blue exclusion test. Apoptosis induction, associated with the translocation of phosphatidylserine to the outside of the cell and the loss of plasma membrane integrity, was quantified by flow cytometry. Jurkat and MOLT-4 cell cultures treated with scoulerine contained both early and late apoptotic cells, whose proportions increased in a dose-dependent manner. Apoptosis induction was further supported by measuring DNA fragmentation with the TUNEL assay, by quantifying caspase activity and by detecting p53 expression in MOLT-4 cells. Collectively, scoulerine activated caspases-3/7, -8 and -9, which suggests the involvement of both the extrinsic and intrinsic pathways of programmed cell death. Further analysis by flow cytometry revealed that scoulerine treatment led to G2 phase arrest in Jurkat and MOLT-4 leukemic cells, followed by an increase in the percentage of the cell population in mitosis, as judged by PI and phospho-histone H3 double staining. During mitotic division, the tubulin-microtubule system undergoes assembly and is essential for chromosome segregation, making it a suitable target for antimitotic compounds, which interfere with microtubule function and thereby inhibit cell proliferation in mitosis 14 . Since uncontrolled and rapid cell division is a hallmark of cancer growth, microtubule-interfering agents display remarkable efficacy in rapidly proliferating cancer cells 15 .
Therefore, we sought to elucidate, using immunofluorescence imaging, whether treatment with scoulerine affects the cytoskeletal microtubule architecture of cells. The epi-fluorescence images obtained from A549 cells exposed to 10 and 20 μM of scoulerine for 24 h and stained with an anti-β-tubulin antibody revealed a disruption of the tubulin structure. These results support the notion that scoulerine is a mitotic poison producing G2 or M arrest. Unlike drugs grouped as genotoxic anticancer agents, scoulerine does not appear to interfere with DNA, and therefore activates cancer cell death independently of exogenously induced DNA damage. These findings oppose those reported by Cheng, who proposed that the anticancer activity of scoulerine is associated with the formation of single-strand DNA breaks through stabilization of the covalent topoisomerase I-DNA complex 10 . Thus, in contrast to the previous findings describing a DNA-damaging action of scoulerine, no significant DNA damage was found at the early 12 h interval using the alkaline comet assay. However, as we expected, the later 24 h interval of treatment produced DNA fragmentation resulting from internucleosomal cleavage of DNA during scoulerine-induced apoptosis. Such single- or double-strand DNA breaks generated during apoptotic cell death were cross-verified by TUNEL at 24 h in both leukemic cell lines. Numerous reports have shown that replication stress and DNA damage activate the critical cell-cycle checkpoint kinases Chk1 and Chk2 of the DNA damage response pathway 16,17 . Both Chk1 phosphorylated at Ser345 by Ataxia telangiectasia and Rad3-related (ATR) kinase and Chk2 phosphorylated at Thr68 by Ataxia telangiectasia mutated (ATM) kinase were detected after 24 h of treatment. These results imply that activated Chk1 and Chk2 are particularly important for pausing cell cycle progression after scoulerine exposure.
Although scoulerine has been studied for several years, only a few publications regarding its cytotoxic activity against mammalian cells exist. The most explored aspect is its biosynthesis in plants and the subsequent benzo[c]phenanthridine, protopine and protoberberine alkaloids (sanguinarine, protopine and berberine, respectively) 6,18 .

[Figure caption (cf. Fig. 8): Nocodazole, an antineoplastic agent that disrupts microtubule function by binding to tubulin, was used as a reference compound in this assay. Scale bar: 10 µm. Experiments were performed in triplicate using epifluorescence microscopy; photographs from representative chambers are shown. Compared with controls, thicker and denser microtubule bundles were evident in scoulerine-treated cells.]

Among these alkaloids, it is important to point out that sanguinarine, protopine and berberine have been reported to show various degrees of cytotoxicity and antiproliferative activity against cancer cells. Although sanguinarine did not change the proportion of mitotic cells, it inhibited tubulin polymerization, leading to disruption of the microtubule network 19 . Interestingly, protopine is able to target microtubule structures in living cells without affecting tubulin polymerization, with cell cycle arrest at G2/M 20 . In the same manner, disruption of the microtubule network has been described in response to berberine treatment 21 . Even though these structurally related substances share some mechanistic similarities, it should be noted that they differ in growth-inhibitory values and molecular actions, making them attractive subjects for structure-activity relationship studies.
In summary, scoulerine was shown to be broadly cytostatic and cytotoxic in the micromolar concentration range. Investigating the underlying MOA further, we showed that scoulerine caused a pronounced accumulation of cells in the G2 or M phases of the cell cycle, increased phosphorylation of histone H3 at Ser10 and disrupted microtubule organization, consistent with an antimitotic mechanism of action. Additionally, scoulerine showed molecular activity typical of apoptosis, with a p53 protein increase, caspases-3/7, -8 and -9 activation, phosphatidylserine externalization and DNA fragmentation. Our findings suggest that scoulerine activates ATR and ATM kinase-dependent cell cycle checkpoint signaling, followed by phosphorylation of Chk1 at Ser345 and Chk2 at Thr68. To the best of our knowledge, this is the first study to show the potent antimitotic activity of scoulerine, leading to microtubule disruption, cell cycle arrest and cell death via apoptosis induction in cancer cells. Altogether, our results identify the isoquinoline alkaloid scoulerine as a potent microtubule-targeting agent, and the data on the biochemical interactions underlying its cytotoxic, antiproliferative and proapoptotic actions are sufficiently encouraging to warrant further research on its anticancer potential.

General procedure for acylation of scoulerine. 1.5 equiv. of the corresponding anhydride was added to a solution of scoulerine (1) in 3 mL of dry pyridine. The mixture was stirred at room temperature until disappearance of the starting material. The solvent was then evaporated and the residue was purified by preparative TLC using cHx:Et2NH 9:1 or cHx:To:Et2NH 45:45:10 to afford the corresponding esters 2-4 22 .

[Partial characterization data (13C NMR chemical shifts, δ in ppm): 149.2, 149.0, 138.1, 136.1, 132.9, 130.0, 128.1, 127.5, 126.3, 119.7, 112.2, 110.5, 58.6, 56.0, 55.9, 53.3, 51.1, 35.8, 29.4, 27.3, 27.2, 9.3]
The purity of all compounds, verified by NMR, was ≥97%. Cytotoxicity screening using the XTT assay. To determine the viability of cells treated with scoulerine, with the alkaloid esters (single dose of 10 µM) or with scoulerine over a broad concentration range, we used a standard colorimetric method measuring tetrazolium salt reduction via mitochondrial dehydrogenase activity. The cells were seeded at a previously established optimal density in a 96-well plate. After 48 hours of incubation, cell viability was determined using the Cell Proliferation Kit II (XTT, Roche, Germany) according to the manufacturer's instructions. The XTT assay was conducted in a total volume of 200 μl with 100 μl of XTT-labeling mixture. Absorbance was then measured at 480 nm using a 96-well microplate reader, Tecan Infinite M200 (Tecan Group Ltd., Männedorf, Switzerland). Viability was calculated as described by Havelek and colleagues using the following formula: viability (%) = (A480sample − A480blank)/(A480control − A480blank) × 100, where A480 is the absorbance of the XTT formazan produced, measured at 480 nm 23 . Data were analysed with GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA) statistical software. Each value is the mean of at least three independent replicates of each condition.
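The viability formula quoted above translates directly into code; a one-line implementation with illustrative absorbance values follows (the readings are placeholders, not data from this study):

```python
def xtt_viability(a_sample, a_control, a_blank):
    """Percent viability from XTT absorbance readings at 480 nm,
    implementing the formula quoted in the text:
    viability (%) = (A480sample - A480blank) / (A480control - A480blank) * 100
    """
    return (a_sample - a_blank) / (a_control - a_blank) * 100.0

# Example with illustrative absorbance values (not data from this study):
print(xtt_viability(a_sample=0.62, a_control=1.10, a_blank=0.08))  # ~52.9 %
```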
Screening for antiproliferative activity using the xCELLigence system. The Single Plate station was placed inside the incubator at 37 °C and 5% CO2. First, the optimal seeding concentration was determined for each cell line. After seeding the respective number of cells in 190 µL of medium per well of the E-plate 96, the proliferation, attachment and spreading of the cells were monitored every 30 minutes by the xCELLigence system. Approximately 24 h after seeding, when the cells were in the log growth phase, they were exposed in triplicate to 10 µL of sterile deionized water containing scoulerine, to obtain final concentrations of 1-50 μM. Controls received sterile deionized water plus DMSO at a final concentration of 0.1%. Cells treated with 5% DMSO were used as a positive control. Growth curves were normalized to the time point of treatment. Evaluations were performed using xCELLigence 1.
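The normalization of growth curves to the time point of treatment mentioned above can be sketched as follows: a minimal, assumption-laden version in which each well's Cell Index trace is divided by its value at the treatment time, so that all curves equal 1 at the moment of compound addition.

```python
import numpy as np

def normalize_to_treatment(cell_index, t, t_treat):
    """Normalize xCELLigence growth curves to the time point of treatment.

    cell_index: (n_wells, n_timepoints) array of Cell Index readings.
    t:          (n_timepoints,) array of acquisition times.
    t_treat:    time at which the compound was added.
    """
    i_treat = int(np.argmin(np.abs(t - t_treat)))
    # keep the trailing axis so division broadcasts over timepoints
    return cell_index / cell_index[:, i_treat:i_treat + 1]
```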
Activity of caspases.
The induction of programmed cell death was determined by monitoring the activities of caspases-3/7, caspase-8 and caspase-9 with Caspase-Glo Assays (Promega, Madison, WI, USA) 24 and 48 h after treatment with 2.5 and 5 μM of scoulerine. Cells treated with 5 µM cisplatin were used as a positive control. The assay provides a proluminogenic substrate in an optimized buffer system; addition of the Caspase-Glo Reagent results in cell lysis, followed by caspase cleavage of the substrate and generation of a luminescent signal. A total of 1 × 10⁴ cells were seeded per well in a 96-well-plate format (Sigma-Aldrich, St. Louis, MO, USA). After treatment, the Caspase-Glo Assay Reagent was added to each well (50 μl/well) and incubated for 30 minutes before luminescence was measured using a Tecan Infinite M200 spectrometer (Tecan Group, Männedorf, Switzerland). Epi-fluorescence microscopy. For each condition, 250,000 cells were seeded in 2-well chamber slides (SPL Life Sciences, Korea). After seeding (usually 24 h later), the spent medium was replaced with fresh medium and the cells were treated with scoulerine at 5, 10 and 20 µM. Cells treated with 5 µM nocodazole were used as a positive control. Following the 24-h treatment, the cells were fixed with 4% freshly prepared paraformaldehyde for 10 minutes.
"Biology",
"Chemistry"
] |
Characterizing current structures in 3D hybrid-kinetic simulations of plasma turbulence
In space and astrophysical plasmas, turbulence leads to the development of coherent structures characterized by strong current density and significant magnetic shear. Using hybrid-kinetic simulations of turbulence (3D, with different energy injection scales), we investigate the development of these coherent structures and characterize their shape. First, we present different methods to estimate the overall shape of a 3D structure using local measurements, with a view to application to satellite data. Then we study the local magnetic configuration inside and outside current peak regions, comparing the statistics in the two cases. Finally, we compare the statistical properties of the local configurations obtained in simulations with those obtained by analyzing an MMS dataset with similar plasma parameters. With this analysis, (1) we validate the possibility of studying the overall shape of 3D structures using local methods, (2) we provide an overview of the local magnetic configurations emerging in different turbulent regimes, and (3) we show that our 3D-3V simulations can reproduce the structures that emerge in the MMS data for the periods studied by Phan et al. (2018) and Stawarz et al. (2019).
Introduction
Turbulent, magnetised plasmas permeate a wide range of space and astrophysical environments, and plasma turbulence naturally develops coherent structures characterized by high current density and strong magnetic shear. These features are indeed present in practically any turbulence simulation employing the most disparate plasma models and regimes (e.g., Zhdankin et al. 2013; Passot et al. 2014; Navarro et al. 2016; Zhdankin et al. 2017; Cerri et al. 2019; Comisso and Sironi 2019, and references therein), and are routinely observed via in-situ measurements in space plasmas such as the solar wind and the near-Earth environment (e.g., Podesta 2017; Greco et al. 2018; Fadanelli et al. 2019; Pecora et al. 2019; Gingell et al. 2020; Khabarova et al. 2021, and references therein). The characterization of current structures in turbulent plasmas is of particular interest not only because magnetic reconnection and/or different dissipation processes can occur inside (or close to) these regions, thus enabling energy conversion and plasma heating (e.g., Gosling and Phan 2013; TenBarge and Howes 2013; Osman et al. 2014; Zhdankin et al. 2014; Navarro et al. 2016; Grošelj et al. 2017; Matthaeus et al. 2020; Agudelo Rueda et al. 2021, and references therein), but also because reconnection processes occurring within such structures can in turn feed back onto the turbulence itself by playing a major role in the scale-to-scale energy transfer (e.g., Carbone et al. 1990; Cerri and Califano 2017; Loureiro and Boldyrev 2017; Franci et al. 2017; Mallet et al. 2017; Camporeale et al. 2018; Dong et al. 2018; Vech et al. 2018; Papini et al. 2019). In order to determine the physical behavior of coherent current structures, it is of foremost importance to understand how they manifest within the (turbulent) magnetic-field dynamics. This task can be separated into two main inquiries. 1) On the one hand, one must determine the current-structure geometry by defining the three characteristic scale lengths of such structures, usually called thickness (the smallest), width (the intermediate) and length (the largest). Such a definition of characteristic lengths was employed, for instance, by Zhdankin et al. (2013) to characterize the current structures emerging in "reduced-MHD" simulations of plasma turbulence. While in a simulation one can always precisely define the geometry of a current structure, this is less obvious when it comes to in-situ satellite data. A spacecraft can indeed cross a coherent current structure along its path, but it has no information about the structure's overall geometry, and even a multi-spacecraft fleet can only measure the spatial variation of physical fields on scales that are in general much smaller than those of any coherent current structure. 2) On the other hand, it is also of key importance to understand how (and whether) local magnetic-field configurations within the above-mentioned current structures are systematically different from the magnetic configurations belonging to the rest of the (turbulent) environment. While such a characterization can be achieved by a number of procedures, from now on we will focus on the "Magnetic Configuration Analysis" (MCA) method proposed by Fadanelli et al. (2019). The MCA method is a modification of existing techniques that have been previously employed to investigate the local configuration of the magnetic field (namely, the Magnetic Directional Derivative by Shi et al. (2005) and the Magnetic Rotational Analysis by Shen et al.
(2007); see Shi et al. (2019) for a review of these different techniques). Contrary to measures of current-structure geometry, the analysis of magnetic configurations can be performed on data from plasma simulations as well as on data from multi-spacecraft missions. For instance, in Fadanelli et al. (2019) the authors apply the MCA technique to long intervals of data collected by the Magnetospheric Multiscale (MMS) mission (see Burch et al. (2016)), showing that it is possible to obtain statistics of the local configurations developing in the outer magnetosphere, in the magnetosheath, and in the near-Earth solar wind.
In the first part of this work, we develop and describe three methods by which it is possible to estimate some aspects of a current structure's geometry starting from local measurements. We test these methods on the structures emerging in 3D-3V Hybrid-Vlasov-Maxwell (HVM) simulations of plasma turbulence (three spatial dimensions plus three-dimensional velocity space, with kinetic ions and fluid electrons). By comparing the results obtained from these three local methods with those resulting from a non-local approach, we show that it is indeed possible to estimate the overall shape of a current structure using only local measurements.
The second part of this work investigates the physical characteristics of the current peak regions forming in three different three-dimensional Hybrid-Vlasov (3D-3V HVM) simulations. In particular, we analyze the features of the magnetic configurations in the three simulations and compare them with those obtained by analyzing the MMS observations of plasma turbulence measured on December 9th, 2016 (Phan et al. 2018; Stawarz et al. 2019). Indeed, one simulation setup included in this work is a three-dimensional equivalent of the 2D setup employed by Califano et al. (2020), which proved capable of qualitatively reproducing the turbulent and reconnection regime observed during that period. This paper is organized as follows. In Sections 2 and 3 we present the main features of the HVM simulations of plasma turbulence employed here, including an overview of the turbulent spectra obtained in the different cases. In Section 4.1 we give a precise definition of "current structure" in our simulations and define two non-local/overall shape factors, the planarity P and the elongation E, to characterize the 3D shape of a current structure. In the same Section we clarify what we mean by "magnetic configuration" and how MCA defines analogous shape factors to characterize any such configuration. Then, in Section 4.2, we present three different methods by which purely local measures can be converted into estimates of E and P of current structures, and show their effectiveness. In Section 5 we investigate the physical features of the current regions forming in the 3D simulations used here and apply the analysis proposed by Fadanelli et al. (2019) to our simulations. This analysis shows that the magnetic configurations inside coherent current regions behave differently from those in the rest of the plasma. Moreover, we show that the statistical distributions of these quantities obtained from the numerical simulations reproduce well those obtained by analyzing the MMS data of December 9th, 2016 (Phan et al. 2018; Stawarz et al. 2019). Finally, we summarize the results obtained and their importance in Section 6.
Simulations
In this paper, to investigate the emergence of (coherent) current structures and characterize their nature, we make use of kinetic numerical simulations of plasma turbulence performed with the HVM model, with kinetic ions and fluid, neutralizing electrons of finite mass (Valentini et al. 2007). The corresponding set of equations is normalized using the ion mass m_i, the ion cyclotron frequency Ω_ci, the Alfvén velocity v_A and the ion skin depth d_i = v_A/Ω_ci. As a result, the dimensionless electron skin depth is given by the electron-to-ion mass ratio, d_e = √(m_e/m_i). The ion distribution function f_i = f_i(x, v, t) evolves following the Vlasov equation, which in dimensionless units reads

∂f_i/∂t + v · ∇f_i + (E + v × B) · ∇_v f_i = 0,

giving the number density n and the ion fluid velocity u_i as moments of f_i. The electron fluid response is then given by a generalized Ohm's law for the electric field E, including the Hall, diamagnetic and electron-inertia effects (see Valentini et al. (2007) for its explicit form). In the electron-inertia term we approximate 1/n = 1, in normalized units, for the sake of computational simplicity. Furthermore, J = ∇ × B is the current density (neglecting the displacement current in the low-frequency regime). In the Ohm's law we assume an isothermal equation of state for the electron pressure, P_e = n T_0e, with a given initial electron-to-ion temperature ratio T_0e/T_0i = 1. The initial ion distribution function is a Maxwellian with uniform temperature. Finally, the evolution of the magnetic field B is given by the Faraday equation,

∂B/∂t = −∇ × E.

We take an initial magnetic field given by a uniform background field, B_0 e_z with B_0 = 1, and a superimposed small-amplitude 3D perturbation, δB = δB_x e_x + δB_y e_y + δB_z e_z, computed as the curl of a vector potential, δB = ∇ × δA. In particular, δA is given by a sum of sinusoidal modes with random phases in a limited wavevector interval corresponding to the largest wavelengths admitted by the numerical box. In Table 1 we list the parameters of the four simulations: the box spatial size, the corresponding number of grid points, the wavevector range of the initial perturbation, the root-mean-square amplitude of each component of the perturbed magnetic field and the ratio between the root mean square of the magnetic perturbation and the equilibrium field. For all simulations, we sample the velocity space using 51³ uniformly distributed grid points spanning [−5v_th,i, 5v_th,i] in each direction, where v_th,i = √(β_i/2) v_A is the initial ion thermal velocity and β_i = 1. We set a reduced mass ratio m_i/m_e = 100.
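As an illustration of the initialization described above, the following sketch builds a divergence-free perturbation δB = ∇ × δA from random-phase Fourier modes confined to a wavevector band. The band selection and normalization are simplified placeholders, not the exact procedure of the HVM code.

```python
import numpy as np

def init_delta_b(shape, dx, k_band, rms_target, seed=0):
    """Build dB = curl(dA), with dA a superposition of sinusoidal modes
    with random phases restricted to a wavevector band k_band = (k_lo, k_hi).
    A sketch of the initialization described in the text; amplitudes and
    normalization are illustrative."""
    rng = np.random.default_rng(seed)
    k = [np.fft.fftfreq(n, d=dx) * 2 * np.pi for n in shape]
    KX, KY, KZ = np.meshgrid(*k, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
    band = (kmag >= k_band[0]) & (kmag <= k_band[1])

    def random_component():
        # unit-amplitude random phases inside the band, zero elsewhere
        phases = np.exp(2j * np.pi * rng.random(shape))
        return np.fft.ifftn(np.where(band, phases, 0.0)).real

    ax, ay, az = random_component(), random_component(), random_component()
    axh, ayh, azh = (np.fft.fftn(f) for f in (ax, ay, az))
    # spectral curl: dB_hat = i k x dA_hat (guarantees div(dB) = 0)
    bx = np.fft.ifftn(1j * (KY * azh - KZ * ayh)).real
    by = np.fft.ifftn(1j * (KZ * axh - KX * azh)).real
    bz = np.fft.ifftn(1j * (KX * ayh - KY * axh)).real
    # uniform rescaling preserves div(dB) = 0; for isotropic statistics
    # the per-component rms is then approximately rms_target
    rms = np.sqrt(np.mean(bx**2 + by**2 + bz**2) / 3.0)
    s = rms_target / rms
    return bx * s, by * s, bz * s
```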
In the following we refer to the 3D-3V simulations (the first three listed in Table 1) as "SIM A", "SIM B-wf" (weak-forcing scenario) and "SIM B-sf" (strong-forcing scenario). As a reference for comparison, we also use a 2D-3V hybrid Vlasov-Maxwell simulation, "SIM 2D" (the last in Table 1). The reasons behind the choice of simulation parameters are discussed in the following section, together with a brief overview of the turbulence evolution and properties. As summarized in Table 1, each simulation initially injects fluctuation energy in a different wavenumber range, with different root-mean-square (rms) values of the initial magnetic fluctuations, exploring a different extent of sub-ion-gyroradius scales. In particular, SIM B-wf and SIM B-sf inject energy only slightly above the ion characteristic scales and are able to excite a wide range of kinetic scales before reaching their dissipation scale (i.e., resolving about a decade of clean sub-ion-range turbulence before entering the dissipation-dominated scales). In fact, as done in Califano et al. (2020) for the 2D-3V case, the aim of both SIM B-wf and SIM B-sf is to mimic the conditions possibly encountered in the Earth's magnetosheath, past the bow shock, where the occurrence of electron-only reconnection has been observed (Phan et al. 2018; Stawarz et al. 2019). For completeness, we include a simulation (SIM A) where energy is injected at larger scales, including part of the final MHD turbulent range, and a simplified two-dimensional one (SIM 2D) covering an even larger range.
We recognize three phases in the temporal evolution of all simulated systems: (1) an initial phase, (2) a transition phase and (3) fully developed turbulence. At the beginning (1), the energy injected at large scales starts to cascade towards smaller and smaller scales. In the transition phase (2), coherent structures, i.e. regions where the current density peaks, start to form (usually at about one eddy-turnover time). Finally, a fully developed turbulent state (3) is reached, with complex nonlinear dynamics in which coherent structures continuously form, merge and/or are destroyed. We denote the times at which the 3D simulations reach saturation (the maximum of the root mean square of the current density) as t_sat,A (SIM A), t_sat,B-wf (SIM B-wf) and t_sat,B-sf (SIM B-sf); at these times turbulence is fully developed (roughly speaking, saturation times correspond to about 3 eddy-turnover times). From now on, unless explicitly stated otherwise, all the analysis is carried out at saturation times.
In Figure 1 we show the reduced one-dimensional, k∥-averaged magnetic energy spectra versus k⊥ for the simulations listed in Table 1. For visual purposes, the spectra have been normalized so that they overlap at k⊥ d_i ≈ 2. The power laws k⊥^(−5/3) and k⊥^(−8/3) are displayed as references for the MHD and kinetic ranges, respectively. At "fluid" perpendicular scales, in the range 0.1 ≤ k⊥ d_i ≤ 2, a power law close to (but slightly steeper than) −5/3 is visible in both SIM A and SIM 2D. The spectral slope evaluated at k⊥ d_i ≲ 1 for these two simulations (excluding the first modes, which are affected by the initial condition) is indeed in the range −1.9 ≲ α ≲ −1.8 (the actual value depends slightly on the exact wavenumber range over which the fit is performed). On the other hand, SIM B-wf and SIM B-sf do not include enough MHD scales to draw any conclusion regarding the emerging spectral slope for k⊥ d_i < 2. While the difference between the observed slope in SIM 2D and the expected −5/3 power law may be related to the reduced dimensionality, which prevents critical balance from being properly established (Schekochihin et al. 2009), this discrepancy is also observed in the spectrum emerging from the 3D case (SIM A, whose magnetic-field spectrum agrees very well with the corresponding spectrum of SIM 2D). As noticed also in previous 3D-3V simulations (Cerri et al. 2019), whether this feature in SIM A is due to the limited extent of the MHD range is unclear, and will have to await further investigation by means of even larger 3D simulations. At "kinetic" scales, in the range k⊥ d_i > 2, a power law fairly consistent with −8/3 clearly emerges only in SIM B-sf (a hint of such a power law is visible also in SIM A, but the limited extent of the sub-ion range does not allow it to develop over a broad range of scales before numerical dissipation takes over). Such a slope would be consistent with an "intermittency-corrected" kinetic-Alfvén-wave cascade occurring at sub-ion scales (Boldyrev and Perez 2012), as previously reported in 3D-3V hybrid-Vlasov simulations (Cerri et al. 2017b, 2018). The other two simulations, SIM 2D and SIM B-wf, exhibit power laws steeper than −8/3. A magnetic-field spectrum close to k⊥^(−3) in the kinetic range was already observed in previous 2D hybrid-kinetic simulations (see, e.g., Cerri et al. (2017a) and references therein), sometimes attributed to the effect of a reconnection-mediated two-dimensional cascade at sub-ion scales (Cerri and Califano 2017; Franci et al. 2017), which may continue to hold also in three spatial dimensions (Loureiro and Boldyrev 2017; Mallet et al. 2017), although definitive evidence of such a regime has been elusive in the 3D kinetic simulations performed thus far (Cerri et al. 2019; see also Agudelo Rueda et al. (2021) for a more recent attempt). For SIM B-wf, on the other hand, the steeper spectrum may be attributed to a weaker cascade associated with the lower amplitude of the injected fluctuations (with respect to SIM B-sf), rather than to dissipation effects associated with ion heating and/or fluctuation-damping mechanisms in the β ≈ 1 regime considered here (see, e.g., Told et al. (2015); Sulem et al. (2016); Arzamasskiy et al. (2019) and references therein). In fact, in SIM B-wf there is no evidence of a clear cascade developing along the direction parallel to B_0 (not shown here).
However, a detailed analysis of the spectral properties of the fluctuations and the associated ion-heating mechanisms (and how they change between the different simulations) is beyond the scope of this work and will be reported elsewhere.
In Figure 2 we show, for a) SIM A and b) SIM B-sf, a 3D rendering of the current density magnitude in shaded isocontours (we remind the reader that SIM B-sf employs a simulation box that, compared to the domain of SIM A, is about one third the size in each direction). Blue marks the regions of grid points that exceed the current-density threshold J_th defined in the next Section.

Fig. 1: One-dimensional, k∥-averaged magnetic energy spectra versus k⊥ for our four simulations, normalized so that they overlap at k⊥ d_i ≈ 2.
Definitions
In order to analyze the physics of current structures, we first define the procedure for identifying a current structure as well as the parameters adopted to characterize its shape. Alongside these definitions, we also discuss the concept of "magnetic configuration" and the meaning of its shape, a concept that will be applied extensively in the following sections.
Similarly to what is done in Uritsky et al. (2010) and Zhdankin et al. (2013), we adopt the formal definition of a "current structure" as any connected region where the current-density magnitude, J ≡ |J|, is above a threshold value for all internal points, and whose boundaries do not touch any other such zone. We stress that the specific definition of such a threshold is somewhat arbitrary. In this work, consistently with Uritsky et al. (2010) and Zhdankin et al. (2013), we choose a current threshold defined by J_th = √(⟨J²⟩ + 3σ(J²)), where σ(J²) = √(⟨J⁴⟩ − ⟨J²⟩²) and ⟨...⟩ denotes the spatial average over the whole simulation box. We remark that, by this procedure, an isolated point with above-threshold current density is not recognized as a current structure on its own, but it can be identified as part of another current structure if the two are within a minimum distance of one grid point from each other. For shorthand, we call the "center" of a structure the point within the structure's domain at which J is maximum.
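In practice, this identification amounts to thresholding and connected-component labeling. A minimal sketch follows, using the threshold formula as reconstructed above and ignoring periodic boundaries for simplicity.

```python
import numpy as np
from scipy import ndimage

def find_current_structures(J):
    """Identify current structures in a 3D array of |J| values:
    threshold J_th = sqrt(<J^2> + 3*sigma(J^2)), with
    sigma(J^2) = sqrt(<J^4> - <J^2>^2), then label connected
    above-threshold regions (points within one grid cell count
    as connected). Periodic boundaries are not handled here.
    """
    J2 = J ** 2
    sigma_J2 = np.sqrt(np.mean(J2 ** 2) - np.mean(J2) ** 2)
    J_th = np.sqrt(np.mean(J2) + 3.0 * sigma_J2)

    mask = J > J_th
    # full 26-neighbor connectivity in 3D
    labels, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    # "center" of each structure: the point of maximum J inside it
    centers = ndimage.maximum_position(J, labels, index=range(1, n + 1))
    return labels, n, centers, J_th
```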
To characterize the spatial extension (or "geometry") of a current structure, we introduce three characteristic lengths, ℓ_max, ℓ_med and ℓ_min. In particular, the "length" ℓ_max indicates the longest of the structure's dimensions (it goes to infinity in 2D systems), the "thickness" ℓ_min is the smallest dimension, and the "width" ℓ_med is the intermediate characteristic length.

To determine these quantities, we first compute the eigenvectors of the Hessian matrix of the current density at the center of each structure. Then, we consider the plane perpendicular to the direction of least variation (the direction associated with ℓ_max). In this plane there is a direction of strongest variation of the current density (associated with ℓ_min) and a direction of weakest variation (associated with ℓ_med). We define the thickness ℓ_min as the maximum distance between two points with J ≥ J_th along the direction of strongest variation; the width ℓ_med as the maximum distance between two points with J ≥ J_th in the plane; and the length ℓ_max as the maximal distance between two points belonging to the same structure, without restriction to special planes. We note that this definition of the thickness ℓ_min differs from that of Zhdankin et al. (2013), who use the full width at half maximum of the interpolated profile of J. Here we define the thickness using the maximum distance between two points with J ≥ J_th along the direction of strongest variation, in order to remain consistent with the definitions of width and length. In the 2D case we have only thickness and width, computed as above. In Appendix A we clarify the impact of slightly different definitions of ℓ_min on our analysis.
Once that the "geometry" has been clarified, it is immediate to define also the "shape" of current structures. In a current structure, we call "shape" the quantity determined by two "structure shape factors" defined as: planarity (current str.) These two parameters measure the tendency of the current structure to squeeze toward some elongated form (E → 1) or to flatten (P → 1). In this way, we can represent each structure's shape in the plane E − P, and eventually classify shapes into categories. In particular, we adopt the following nomenclature for certain characteristic shapes: (i) "pseudo-spheres" (low E, low P), (ii) "cigars" (high E, low P), (iii) "pancakes" (low E, high P), (iv) "knife blades" (high E, high P). In Figure 3 we show a schematic representation of a current structure with its characteristics lengths: min , med and max . We note that the three quantities of thickness, width and length, here presented, are enough to define the shape of compact structures in 3D. In case the holes would appear in current structures, new parameters should be added as a filling fraction or the hole dimensions. Such upgrade will be investigated in future work. By "magnetic configuration" we mean the local characterization of the B field that can be inferred at any point in our system, even far from a current structure or its peak, by inspection of the (symmetric) tensor N ≡ [∇B] · [∇B] T /B 2 Fadanelli et al. (2019). Because of its symmetric nature, N can be defined in terms of its orthogonal eigenvectors and corresponding eigenvalues: we denote these eigenvalues by σ max , σ med , σ min . The quantities 1/ √ σ min , 1/ √ σ med , 1/ √ σ max can be understood as proportional to the three characteristic variation lengths. Here and elsewhere, when considering the eigenvalues, it is important to stress that they are only proportional to the characteristic variation lengths, not directly comparable, due to the presence of normalization factors. What is really significant is the ratio between eigenvalues, as we will see in the next. Indeed, taking the ratio between eigenvalues the normalization factors cancel each other out.
As with current structures, a "shape" can be defined for magnetic configurations. The idea is to treat 1/√σ_min, 1/√σ_med and 1/√σ_max in exactly the same way as the characteristic lengths, and to define two "configuration shape factors" as

ℰ = 1 − √(σ_min/σ_med)  (elongation, configuration)
𝒫 = 1 − √(σ_med/σ_max)  (planarity, configuration)

We note here that the "configuration shape factors" are local quantities, which can be computed at any spatial location, including points belonging to a current structure; the "structure shape factors", by contrast, provide an overall/non-local picture of a current structure. As E and P do for current structures, ℰ and 𝒫 measure the tendency of magnetic configurations toward elongation (ℰ → 1) or squashing into sheets (𝒫 → 1). Similarly, the position of a configuration in the ℰ-𝒫 plane allows the local shape of B to be classified into one of the following four categories: (i) "pseudo-sphere", (ii) "cigar", (iii) "pancake" or (iv) "knife blade". We remind the reader that the work by Fadanelli et al. (2019) focused only on satellite data and thus on local measurements, providing point-by-point values of ℰ and 𝒫 along the spacecraft trajectories. Here, by contrast, taking advantage of the 3D data of a simulation, we can also give an overall/non-local picture of the current structures (via E and P).
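A minimal sketch of how ℰ and 𝒫 can be evaluated on a simulation grid follows. The gradient stencil and the square-root eigenvalue ratios reflect the definitions as reconstructed above, and should be adapted to the conventions of the actual analysis code.

```python
import numpy as np

def configuration_shape_factors(B, dx):
    """Local MCA configuration shape factors at every grid point.

    B: array of shape (3, nx, ny, nz) for the three field components.
    Builds N = (gradB)(gradB)^T / B^2, diagonalizes it, and evaluates
    elongation and planarity from the eigenvalue ratios:
    E = 1 - sqrt(sigma_min/sigma_med), P = 1 - sqrt(sigma_med/sigma_max).
    Assumes B^2 > 0 everywhere (guaranteed here by the guide field).
    """
    # gradB[i, j, ...] = d B_j / d x_i, via centered finite differences
    gradB = np.stack([np.stack([np.gradient(B[j], dx, axis=i)
                                for j in range(3)], axis=0)
                      for i in range(3)], axis=0)
    B2 = np.sum(B ** 2, axis=0)
    # N_ij = sum_k (d_i B_k)(d_j B_k) / B^2, a symmetric 3x3 tensor per point
    N = np.einsum('ik...,jk...->...ij', gradB, gradB) / B2[..., None, None]
    sig = np.linalg.eigvalsh(N)               # ascending: min, med, max
    s_min, s_med, s_max = sig[..., 0], sig[..., 1], sig[..., 2]
    E = 1.0 - np.sqrt(np.clip(s_min, 0.0, None) / s_med)
    P = 1.0 - np.sqrt(s_med / s_max)
    return E, P
```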
The shape of current structures
The first goal of this Paper is to determine whether it is possible to estimate the "non-local" overall shape of a current structure from local measurements, i.e. whether there is a procedure to convert purely local measurements of certain quantities into a reliable estimate of the "actual" elongation and planarity of a given structure. We point out here that we consider the estimates of E and P given by the "non-local" overall method of Equations (4) and (5) as the "reference values" against which the following local methods are compared.
- "HJ" method. This approach is based on the Hessian matrix of J calculated at the center of each current structure. Supposing that the ratios of the eigenvalues of this matrix, h_max, h_med and h_min, reproduce the ratios between ℓ_max, ℓ_med and ℓ_min, the shape factors of the structure can be estimated as E_HJ = (1 − h_med/h_max)|_cen and P_HJ = (1 − h_min/h_med)|_cen, where the subscript "cen" indicates that the values are calculated at the structure's center. This method, we note, closely resembles the one adopted by Servidio et al. (2009).
- "NB" method. This approach aims at inferring the "structure shape factors", E and P, from the "configuration shape factors", ℰ and 𝒫, evaluated through the matrix N at the center of a given current structure: E_NB = ℰ|_cen and P_NB = 𝒫|_cen, where σ_max, σ_med, σ_min are the eigenvalues of the tensor N that characterizes the magnetic configuration. It is worth noticing that, while the N tensor is well defined everywhere, for the sake of comparison with the other methods we consider here only its value at the center, assuming it to be representative of the whole structure. However, to ease comparison with satellite data and/or to increase the dataset analyzed, one can apply this method also to points other than the central one. Since the magnetic field and the current density are strongly related (in our model, the current density is the curl of the magnetic field), we also expect E_NB and P_NB to resemble E_HJ and P_HJ.
- "AV" method. This approach considers an average of the magnetic configurations occurring inside each structure; that is, the shape parameters are estimated as E_AV = ⟨ℰ⟩_str and P_AV = ⟨𝒫⟩_str, where ⟨...⟩_str is the average over all points belonging to a single current structure. This procedure, we note, resembles the NB method but can be carried out without knowing where the center of a current structure is located; a minimal numerical sketch of this estimate is given below.
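Combining the two sketches above (structure labeling and local shape factors), the AV estimate reduces to a label-wise average of the local configuration shape factors:

```python
import numpy as np

def av_estimate(E_local, P_local, labels, n_structures):
    """AV-method estimate: average the local configuration shape factors
    over all points of each labeled structure (labels and n_structures
    as produced by the identification sketch above)."""
    E_av = np.array([E_local[labels == s].mean()
                     for s in range(1, n_structures + 1)])
    P_av = np.array([P_local[labels == s].mean()
                     for s in range(1, n_structures + 1)])
    return E_av, P_av
```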
Method comparison
Our goal here is to verify the accuracy of the HJ, NB and AV methods in estimating E and P, which are obtained from the overall/non-local shape of the current structure and are thus taken as our reference values. To establish how accurate the different methods detailed in Section 4.2 are, in this Section we apply them to SIM A, which employs the largest physical domain (and is thus the simulation developing the largest number of current structures to test). To this end, we compute the elongation and planarity of every current structure and then compare them with the estimates obtained by the HJ, NB and AV methods.
In Figure 4 we show how E and P are distributed when defined through ℓ_max, ℓ_med and ℓ_min (Equations 4 and 5), while in Figure 5 we report the results of the estimates from the HJ, NB and AV methods (i.e. the distributions of E_HJ and P_HJ, E_NB and P_NB, E_AV and P_AV, respectively, alongside their ratios to the reference "non-local" overall E and P values for each structure). The physical picture that emerges from Figure 4 is consistent with highly elongated current structures. Most of the occurrences are concentrated at elongations in the interval 0.8-0.95 and planarities between 0.5 and 0.8 (for mean, median and standard deviation see the first two rows of Table 2), suggesting "knife blade" 3D configurations. The results from the HJ and AV methods are in very good agreement with the overall occurrence distribution of E and P, as shown in the top-row panels of Figure 5. The NB method gives the same physical picture as the other methods, with highly elongated and mostly planar current structures (this is particularly clear from panel (e)), but panel (b) also shows a tendency to return higher values of planarity and elongation (the distribution is slightly shifted towards the upper-right corner). The near equivalence of the three methods in evaluating the characteristic shape of the current structures is further confirmed in the bottom row of Figure 5. More specifically, Figure 5 (d) shows that the distribution of E/E_HJ is peaked around 0.9 and that of P/P_HJ around ∼1, meaning that for the majority of current structures the true elongation and planarity are almost the same as those obtained with the HJ method (for mean, median and standard deviation see the 3rd and 4th rows of Table 2). The same agreement is shown in Figure 5 (e) between the control values and the NB estimates, as confirmed by the mean, median and standard deviation reported in Table 2 (5th and 6th rows). Finally, Figure 5 (f) shows that the AV method performs even better than the other local methods, as confirmed by the mean, median and standard deviation values reported in Table 2, 7th and 8th rows.
Although the overall picture shows a very good agreement between the reference and local methods, we note a slight but persistent underestimation of the planarity by the local methods with respect to the reference one, underlined by the mean values reported in Table 2. Despite this general effect, we deduce that by using local methods, and in particular those employing the N tensor, it is possible to correctly estimate the actual shape of current structures. The main advantage of using the N tensor is that it can be applied to any point, not necessarily to the center of a current structure. Even more importantly, the AV method, which is also the one that performs best, can be applied with slight modifications to satellite data, once a proper current threshold is defined.
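As a sketch of the "slight modifications" mentioned above, one possible adaptation of the AV method to a 1D time series is shown below; the grouping of contiguous above-threshold samples into individual structures is our assumption, and the eigenvalue-to-shape-factor mapping is the same as in Section 4.2.

```python
import numpy as np

def runs(mask):
    # Start/stop index pairs of contiguous True runs in a 1D boolean array.
    padded = np.concatenate(([False], mask, [False])).astype(int)
    d = np.diff(padded)
    return list(zip(np.flatnonzero(d == 1), np.flatnonzero(d == -1)))

def av_shape_on_series(J, sigma, J_th):
    # J: (n,) current-density magnitude; sigma: (n, 3) eigenvalues of N.
    shapes = []
    for i0, i1 in runs(J >= J_th):
        s_min, s_med, s_max = np.sort(sigma[i0:i1].mean(axis=0))
        E = 1.0 - np.sqrt(s_min / s_med)   # l_med/l_max = (s_min/s_med)**0.5
        P = 1.0 - np.sqrt(s_med / s_max)   # l_min/l_med = (s_med/s_max)**0.5
        shapes.append((E, P))
    return shapes
```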
Investigating physical characteristics of current structures
Fig. 4: Occurrence distributions of elongation E versus planarity P (as computed using the "non-local" overall method discussed in Section 4.1) for SIM A.
In this section we investigate the physical features of the structures forming in the three different 3D simulations. We will investigate the magnetic-configuration shape factors (planarity and elongation), the characteristic eigenvalues (proportional to the characteristic scale lengths) and the orientation of the magnetic field and current density. We will use the N tensor and its eigenvalues, but without restricting our analysis to points belonging to the current structures, as done for the NB and AV methods; thus we will essentially apply the MCA method as first described in Fadanelli et al. (2019). The aim is to determine the statistical characteristics of the magnetic configurations that emerge in our simulations and whether there is any appreciable difference between the configurations found inside and outside current structures. Thus, we will consider separately 1) all points belonging to current structures, and 2) the same number of points as in the first case, but belonging to a uniform sampling of the simulation box. These last points are called "generic". This type of analysis further expands the methodology employed by Fadanelli et al. (2019) on satellite data, where, however, the statistical properties of magnetic configurations were investigated without distinguishing current structures from the rest of the plasma. In particular, the analysis of "generic" points can be considered the equivalent of a continuous sampling of satellite data, as done by Fadanelli et al. (2019). We expect the uniformly picked set to reproduce data collected by a satellite along 1D trajectories. Indeed, in the uniform sampling we picked one point every three along each direction. Because the typical dimensions of a current structure are on average large enough to include more than three grid points in at least two directions, several points are collected for each structure, as is the case for a continuous satellite sampling. Investigating the possible difference between a uniform sampling and synthetic 1D satellite trajectories is beyond the scope of the present manuscript. Furthermore, we compare the above analysis performed on our 3D-3V simulations with an analogous analysis applied to two intervals of high-resolution (burst) data collected as the MMS spacecraft encountered a turbulent magnetosheath region just downstream of the bow shock. These same intervals were previously considered by Stawarz et al. (2019) for a complete analysis of turbulence. In order to be directly comparable with our simulations, a three-step selection has been applied to the above-mentioned MMS data, so that the magnetic configuration analysis has been performed only on those data points for which (i) the computation of N is precise enough that at least two eigenvalues are well determined, (ii) β ∈ [0.3, 3] and (iii) the resolution attained by MMS data is comparable with that of the numerical simulations (after these selections, roughly 20% of the original data are kept; for details, see Appendix B).
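A toy sketch of the two samplings (with a random stand-in for |J| and an arbitrary grid size) could read:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.random((96, 96, 48))        # stand-in for |J| on the simulation grid
J_th = np.quantile(J, 0.96)         # threshold leaving ~4% of points above it

generic = J[::3, ::3, ::3]          # one point every three per direction
structure_points = J[J >= J_th]     # points belonging to current structures
print(generic.size / J.size)        # ~0.037 (= 1/27)
print(structure_points.size / J.size)  # ~0.04
```

Taking one point every three per direction reduces the set to 1/27 of the grid, which is what makes the two sub-sets comparable in size.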
Analysis of shape factors for magnetic configurations
In Figure 6 we show the occurrence distribution of E vs P in our set of 3D simulations, namely SIM A, SIM B-wf and SIM B-sf (left, center and right column, respectively): in the top row for generic simulation points (which can be considered as the equivalent of the continuous sampling in the methodology developed by Fadanelli et al. (2019)), and in the bottom row only for points belonging to what is identified as a "current structure" (i.e., those regions where the current density is above J_th).
Occurrence distributions considering only points belonging to current structures (bottom row) or generic points (top row) look very different. In particular, if we consider generic points we get a distribution spread in planarity between ∼0.2 and ∼0.9, with a peak around ∼0.6-0.7. Instead, if we consider only points belonging to current structures we get an increase in planarity, with the distribution peaking around ∼0.8-0.9 on the y-axis. The fact that the distributions of magnetic configurations obtained via generic simulation points do not provide a reliable estimate of the shape of current structures is even more evident when comparing the results for SIM A presented in Figure 6 with those obtained in Section 4.3 for the same simulation (i.e., see Figures 4 and 5(a-c)). We further point out that for SIM B-wf this discrepancy between the two distributions (viz., generic points vs only points belonging to current over-densities) seems to be particularly emphasized. In the following we will give the mean, median and standard deviation for the corresponding 1D distributions of planarity and elongation.
The statistical features of the magnetic configurations can be appreciated in Figure 7, where we show the 1D normalized occurrence distributions of E and P for the three simulations SIM A, SIM B-wf and SIM B-sf, for generic points (magenta line) and for points belonging to current structures (black line). We have superposed on these distributions, obtained from simulations, the ones obtained using MMS satellite data (dotted blue line) from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019) (see Appendix B for details).
As in the color maps of Figure 6, the histograms in Figure 7 show an appreciable statistical difference in planarity between generic points (magenta lines) and those located inside current structures (black lines). Indeed, if we consider generic points we obtain a P distribution less skewed towards P ≈ 1, with a peak around 0.6-0.8 (note the log scale of the y-axis). On the contrary, the normalized histogram of P for points belonging to current structures peaks around 0.8-1.0. For all three simulations the values of mean, median and standard deviation are reported in Table 3 (1st row for generic points and 2nd row for current structures) and turn out to be the same. For the elongation, the behavior changes less significantly when considering generic points or current structures. Indeed, the peaks of the occurrence distributions are always close to 1. Also in this case we report mean, median and standard deviation in Table 3 (3rd row for generic points and 4th row for current structures). Summarizing, the emerging picture indicates a majority of "blade-like" magnetic configurations. The different behavior between generic points and those located in current structures is not surprising, since the points located in current structures belong to regions which tend to have a specific shape and a common behavior. In particular, we note that they have a high planarity, confirming the intuitive picture of nearly 2D current sheets. On the contrary, for generic points, picked almost everywhere, including regions where the current density is low, the magnetic configuration behaves differently.
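A minimal plotting sketch reproducing this type of comparison (array names, colors and binning are illustrative) is:

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_planarity(P_generic, P_structures, bins=np.linspace(0, 1, 26)):
    # Normalized occurrence distributions with a log-scaled y-axis,
    # generic points vs points inside current structures.
    plt.hist(P_generic, bins=bins, density=True, histtype="step",
             color="magenta", label="generic points")
    plt.hist(P_structures, bins=bins, density=True, histtype="step",
             color="black", label="current structures")
    plt.yscale("log")
    plt.xlabel("P")
    plt.ylabel("normalized occurrence")
    plt.legend()
    plt.show()
```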
Finally, a major result is that the distributions for satellite data agree with the results of our simulations for generic points. In particular, the agreement is most evident for SIM B-sf. This agreement is a very good result, but not a surprising one, for the following two reasons. First, since there is no selection on the values of J in the analysis performed on the MMS time series, any agreement with such analysis was expected to involve the sub-set of "generic" points from our simulations. Second, simulation SIM B-sf has a setup similar to that of Califano et al. (2020), which already proved capable of qualitatively reproducing the turbulent and reconnection regime observed during that period. Thus we expected to find similar features in the configuration shape factors when comparing SIM B-sf and these particular MMS intervals.
Analysis of the eigenvalues' occurrence distributions and characteristic scale lengths in simulations and satellite data
In Figure 8 we show the normalized occurrence distributions of N's eigenvalues: a) σ_min, b) σ_med, c) σ_max, for SIM A, SIM B-wf and SIM B-sf. We recall that these eigenvalues are interpreted as proportional to the squared inverse of the local variation length (see Section 4.1). We superpose on these distributions, obtained from simulations, the ones obtained using satellite data (dotted blue line) from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019) (with the additional selection discussed at the beginning of this Section; see Appendix B for details). We note that the MMS eigenvalues are normalized using the local value of d_i.
Fig. 7: Normalized occurrence distributions of P and E for the three simulations SIM A, SIM B-wf and SIM B-sf, distinguishing between statistics on generic points (magenta line) and points belonging to current structures (black line). Superposed on these distributions, obtained from simulations, are the ones obtained using satellite data (dotted blue line) from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019); see Appendix B for details.
Fig. 8: Normalized occurrence distributions of N's eigenvalues: a) σ_min, b) σ_med, c) σ_max, for SIM A, SIM B-wf and SIM B-sf, respectively. In magenta, distributions for generic points; in black, for points belonging to current structures. Superposed (dotted blue line) are the distributions obtained using satellite data from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019); see Appendix B for details. Note the logarithmic scale on the x-axis.
Let us first consider the occurrence distribution of the smallest eigenvalue (which is related to the largest characteristic length of the magnetic configuration), shown in column a) of Figure 8. For all three simulations there is almost no difference between the distributions for generic points (magenta line) and for points belonging to current peaks (black line): the two lines are almost superposed. This means that the typical largest scale length of a "generic" region does not differ significantly from that of a structure where the current peaks. Let us now consider instead the occurrence distributions for the other two eigenvalues (related to the median and shortest variation lengths of the local magnetic configurations, shown in columns b) and c) of Figure 8, respectively). In all the simulations, the occurrence distributions of these eigenvalues evaluated at generic points significantly differ from the corresponding distributions obtained considering only those points belonging to current structures. In particular, the difference is more pronounced for the smallest length scale, i.e. the one associated with the largest eigenvalue. In all cases, when compared to those obtained using generic points, the distributions obtained using only the points belonging to current structures shift towards larger σ. This means that where the current attains values higher than J_th, the typical scale lengths of variation of the magnetic field are smaller (as is somewhat expected, since the current is the curl of the magnetic field). Comparing SIM B-wf and SIM B-sf, which we remind the reader are identical except for the amplitude of the injected perturbation, one finds that the peak of the occurrence distributions is located at larger values of log σ_med and log σ_max for SIM B-sf than for SIM B-wf. Thus, simulation SIM B-sf develops current structures with typical scale lengths smaller than those of the other simulations. This is true also for generic points, even if less pronounced. Such a feature is likely a consequence of the fact that the two simulations develop different turbulent regimes (see discussion in Section 3). In fact, by injecting larger-amplitude magnetic fluctuations in SIM B-sf than in SIM B-wf, a shallower sub-ion-scale spectrum develops and, consequently, a larger amount of turbulent power reaches the smallest scales (cf. the magnetic-field spectrum in Figure 1). Whether this is actually due to a weak-like versus strong-like turbulent regime (i.e., whether it is a consequence of critical balance and/or dynamic alignment being established or not) or to other effects (e.g., the development of a significant amount of electron-only reconnection events) is beyond the scope of this work and will have to await further investigation, as well as additional simulations.
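Given this interpretation, the conversion from eigenvalues to characteristic lengths used throughout this Section can be sketched as follows (the d_i normalization mirrors the one applied to the MMS eigenvalues); a shift of the σ distributions towards larger values corresponds to smaller variation lengths:

```python
import numpy as np

def lengths_from_sigma(sigma, d_i=1.0):
    # sigma ~ 1/l**2, so l = d_i / sqrt(sigma); ascending eigenvalues give
    # descending lengths, i.e. the returned tuple is (l_max, l_med, l_min).
    s = np.sort(np.asarray(sigma, dtype=float))
    return tuple(d_i / np.sqrt(s))
```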
For current structures, we report the mean, median and standard deviation values in Table 3 (5th-8th rows), comparing SIM B-wf and SIM B-sf quantitatively.
Concerning the comparison with the distributions extracted from the MMS data of Stawarz et al. (2019), we note a very good agreement with the distributions for generic points of SIM B-sf. As in the previous Section 5.1, this very good agreement with SIM B-sf was somewhat expected, since a similar simulation setup in the 2D case had already proved capable of qualitatively reproducing the turbulent and reconnection regime observed in these intervals (Califano et al. 2020). Thus, we proved once again that a relatively large-amplitude injection close to the ion-kinetic scale is able to reproduce several features observed by MMS in the turbulent magnetosheath past the bow shock, in particular the development of similar 3D structures. Although such a setup constitutes a quite interesting hint, the actual mechanism that could be behind such peculiar injection (e.g., the bow shock itself or the subsequent occurrence of micro-instabilities) is still unclear and requires further investigation, especially from the observational point of view.
Analysis of the orientation of magnetic field and current density
We analyze the orientation of the direction e_min, along which we measure the smallest eigenvalue, with respect to the direction of the local magnetic field and current density, by computing |b·e_min| and |j·e_min|, respectively. Here b and j are the unit vectors of the local magnetic field and current density. As in previous Sections, this calculation is performed both on generic simulation points and restricting only to points belonging to current structures. In Figure 9 we show the normalized occurrence distributions of |b·e_min| (left column) and |j·e_min| (right column) for all three simulations, for generic points (magenta) and for points belonging to current structures (black). We superpose on these distributions those obtained using satellite data (dotted blue line) from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019) (see Appendix B for details).
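The alignment measures themselves are straightforward to compute; a sketch for arrays of field vectors and minimum-variation eigenvectors (array shapes assumed) is:

```python
import numpy as np

def alignments(B, Jv, e_min):
    # B, Jv, e_min: (n, 3) arrays of magnetic field, current density and
    # minimum-variation eigenvectors of N (assumed unit-normalized).
    b = B / np.linalg.norm(B, axis=1, keepdims=True)
    j = Jv / np.linalg.norm(Jv, axis=1, keepdims=True)
    b_align = np.abs(np.einsum("ij,ij->i", b, e_min))   # |b . e_min|
    j_align = np.abs(np.einsum("ij,ij->i", j, e_min))   # |j . e_min|
    return b_align, j_align
```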
Both the local magnetic field and j are well aligned with e_min in all three simulations. In particular, we note that the alignment is stronger for points belonging to current peaks, which means that for these configurations the alignment between the magnetic field and e_min (and similarly for j) is the best one. The strong alignment between j and e_min is expected, since by definition the derivatives of B perpendicular to e_min are the strongest ones. Instead, the alignment between the magnetic field and e_min is less obvious and could depend on the specific environment being considered and/or on the initial parameters of our simulations. In general, we expect the local magnetic field and the current density to align only when there is a significant "guide field" component in the structure (as opposed to those configurations where B vanishes within the current structure, as, for instance, in the typical setup employed to study magnetic reconnection without a guide field). Moreover, the good alignment between e_min and B could also be due to Beltramization of the flow, namely the alignment between current and magnetic field, which is typical of small-scale turbulent structures (see e.g. De Giorgio et al. (2017)).
Also in this case, the comparison with the observational data from MMS (Stawarz et al. 2019) is very good. In particular, we note that for the alignment |b·e_min| the best agreement between MMS data and the simulations' "generic" points is found with SIM B-wf rather than with SIM B-sf. This is probably due to the fact that |b·e_min| is strongly affected by the presence of a guide field within the current structure. In fact, if δB/B_0 is small enough, as it is for SIM B-wf, the guide field is less distorted by the turbulent dynamics, and the direction of weakest variation of the emerging current structures aligns better with b. On the other hand, when large δB/B_0 fluctuations are injected, there is a significant distortion of the background magnetic field. As a result, the emerging structures can be embedded in magnetic shears where, in the plane perpendicular to e_min, there is a weak (or vanishing) guide field.
We note, as in the discussion of Figures 7 and 8, that the agreement of the MMS distribution with the "generic" points rather than with the dataset of points belonging to current structures was to be expected. Indeed, there is no selection on the values of J in the analysis performed on the MMS time series.
Generic points or "background"
Fig. 9: Normalized occurrence distributions of |b·e_min| (left column) and |j·e_min| (right column) for all three simulations; in magenta the distribution for generic points, in black that for points belonging to current structures. The distributions refer to times of fully developed turbulence, namely t_A^sat for SIM A, t_B-wf^sat for SIM B-wf and t_B-sf^sat for SIM B-sf. Superposed (dotted blue line) are the distributions obtained using satellite data from the two high-resolution magnetosheath intervals analyzed in Stawarz et al. (2019); see Appendix B for details.
We briefly explain what it implies to perform a statistical analysis on "generic" points of simulations or, equivalently, on long continuous samplings in observational data, referring to the methodology proposed in Fadanelli et al. (2019). First of all, we want to make sure that the ensemble of "generic" points picked in our analysis constitutes a statistically representative sub-group of the whole set of grid points of the simulation, while simultaneously being comparable to the total number of points belonging to the current structures in that simulation (which is instead the sub-set used when restricting the analysis to these structures). Let us consider, for instance, SIM A, in which the total number of grid points is 352 × 352 × 198 ∼ 2.5 × 10^7. In this case, the sub-set of "generic" points has been created by considering one point every 27 (corresponding to a collection of ∼10^6 points), i.e. around ∼4% of the original set. Analogously, the total number of points which exceed the threshold on current density at time t_A^sat is ∼1.1 × 10^6 (i.e., roughly 4% of the total). Moreover, in our sub-set of "generic" points, those which exceed the threshold on current density are 4.8 × 10^4, again roughly 4% of the sub-set. Thus, the sub-set of "generic" points being considered should adequately represent the ensemble of points of our simulation box. As just discussed, the points which constitute the current structures are always a small percentage of the total number of points in a set or sub-set (i.e., the filling factor of the current structures in a volume is usually very small). Therefore, a natural question to ask is the following: can the behavior of such a small sub-group emerge when we consider histograms and plots which refer to "generic" points (or, equivalently, when we consider a long continuous sample in observational data)? Based on our analysis, the answer to this question is no. This can be seen in Figure 10, which shows the occurrence distribution of elongation E versus planarity P for a) only those "generic" points that do not belong to current structures, and b) the whole sub-set of "generic" points (i.e., including the roughly 4% belonging to current structures). In fact, there is no noticeable difference between the two distributions. In particular, in both cases the elongation and planarity have the same mean, median and standard deviation (see Table 3, rows 9 to 12).
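This bookkeeping can be reproduced directly from the numbers quoted above; the printed fractions fall in the quoted ~4-5% range:

```python
# Bookkeeping for SIM A (numbers quoted in the text):
n_grid = 352 * 352 * 198            # ~2.5e7 grid points in total
n_generic = n_grid // 27            # one point every 27 -> ~1e6 points
n_struct_total = 1.1e6              # points with J >= J_th at t_A^sat
n_struct_generic = 4.8e4            # of which, inside the generic sub-set

print(n_generic / n_grid)           # ~0.037: size of the generic sub-set
print(n_struct_total / n_grid)      # ~0.045: filling factor of structures
print(n_struct_generic / n_generic) # ~0.05: same fraction within the sub-set
```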
The behavior of the current structures cannot emerge if we consider "generic" points, i.e. the equivalent of a long continuous sample in satellite data without introducing any additional selection on the turbulent time series. Therefore, in order to systematically study the statistical behavior of the shape of current structures in satellite data, we suggest fixing a threshold on the current density and performing the analysis only on those regions that exceed this threshold. This procedure is similar to what has been done when applying the AV method discussed in Section 4.2 to our numerical simulations.
Conclusions
In this work we have conducted a wide-ranging analysis of the overall/non-local current structures and local magnetic configurations emerging in plasma turbulence. The analysis is based on three 3D-3V HVM simulations with different box sizes, resolutions and energy injection scales. We focused first on the characterization of the shape of current structures. Our "reference method" defines two parameters, elongation E and planarity P, which can be calculated in a simulation from the three characteristic dimensions of any structure (ℓ_max, ℓ_med and ℓ_min). We have shown that it is possible to reliably estimate the shape of a current structure also by employing three different local methods, namely HJ, NB and AV (see Section 4.2 for details on the methods). The rigorous computation of E and P via the "reference" non-local method can be performed only on simulation data. The HJ and NB methods too can be performed only on simulation data, since they must be applied at the center of a current structure (whose precise position can be known only in simulation data). These two "local" methods are faster than the other two (i.e., the "non-local" and the AV methods), since they need to be computed only on a restricted number of points (namely, the local maxima of the current density). Finally, the AV method, the one in best agreement with the results of the "reference" method, can be applied with minor modifications also to spacecraft data, once a threshold on the current density has been properly defined on the time series under consideration. Based on these four methods (i.e., "reference/non-local", HJ, NB and AV), we have analyzed the distribution of shape factors (i.e., planarity and elongation) for the emerging current structures, with the result that all methods coherently find that they are composed mainly of "knife-blade"-like structures. This picture is different from the one that emerges in Meyrand and Galtier (2013), where the presence of mostly cigar-like structures (they use the expression "filament like") is claimed through visual inspection of their 3D Hall-MHD simulations of turbulence. This suggests that ion-kinetic effects (i.e., beyond just the Hall term) and/or electron-inertia terms could significantly affect (and likely be required to correctly describe) the development of current structures in plasma turbulence across the transition range and at sub-ion scales.
Additionally, we studied the local magnetic configurations by performing an analysis on the simulation data similar to the one proposed by Fadanelli et al. (2019) for satellite data. In particular, we have investigated the magnetic configurations by analyzing the distribution of their planarity and elongation, of their three characteristic scale lengths, and of their orientation with respect to the magnetic-field and current-density directions. Such analysis has been performed both on all points belonging to current structures and on "generic" points belonging to a uniform sampling of the simulation box (the aim of this latter set being to mimic long and continuous time series from satellite data). In general, we found different results when applying the analysis only to those points belonging to current structures or to "generic" points in the simulation domain (i.e., including, but not limited to, structures). In particular, the main statistical difference between generic points and those located inside current structures is in the results obtained for the distribution of planarity P: "knife-blade" shapes are more likely present when considering only those points where the current density is above a certain threshold J_th, while the abundance of "thicker" sheets (or "ellipsoids") is enhanced when points below J_th are included. The behavior of the distributions of planarity P and elongation E for generic points is coherent with the distributions we obtain by applying the same kind of analysis to the high-resolution (burst) MMS data collected during the two turbulence crossings in the magnetosheath previously analyzed by Stawarz et al. (2019), selected so as to fit the simulations' characteristics (see Appendix B for details).
In the analysis of variation scale lengths, we found a sizable difference in the distributions of the largest eigenvalue between generic points and points within structures, suggesting that the smallest characteristic length scale ℓ_min is significantly shorter for current structures. We also noted a difference in this context between SIM B-wf and SIM B-sf, for which all three characteristic length scales are shorter in the strong-forcing scenario (SIM B-sf) than when lower-amplitude fluctuations are injected (SIM B-wf). We remind the reader that in both simulations SIM B-wf and SIM B-sf the energy is injected only slightly above the ion characteristic scales, as done in Califano et al. (2020), which was able to reproduce, in a simplified 2D-3V configuration, the electron-only turbulent regime observed by MMS past the bow shock (Phan et al. 2018; Stawarz et al. 2019). Concerning the comparison with the distributions extracted from these same MMS data (Phan et al. 2018; Stawarz et al. 2019), we found a very good agreement only with the distributions for generic points from SIM B-sf, thus suggesting that similar small-scale current structures require a high level of forcing acting close to the ion-kinetic scales.
Finally, from the analysis of the local orientation of the magnetic field and the current density with respect to the minimum variance direction e_min, we found strong alignments for both fields, more pronounced for points belonging to current structures. Also in this case, the comparison with the observational data from MMS (Phan et al. 2018; Stawarz et al. 2019) is very good. In particular, we note a good agreement with the distributions for generic points in SIM B-wf.
All the analyses presented in this work clearly highlight how magnetic configurations inside current structures exhibit peculiar features that can be retrieved only by considering solely those points where J attains values above a certain threshold. Indeed, we have shown in Section 5.4 that the behavior of the current structures cannot emerge if we consider "generic" points, i.e. the analogue of considering a long continuous time series in satellite data without any further selection based on the values of J, since for such a sample the number of points belonging to current structures will be only a small percentage of the total. Therefore, in order to systematically study the shape of current structures in satellite data, we suggest fixing a proper threshold on the current density and consequently considering only regions that exceed this threshold in the subsequent analysis. Even better, one could apply the AV method described in Section 4.2 in order to isolate different structures (after applying straightforward modifications to adapt the method to simple 1D time series rather than to complex 3D spatial domains).
In conclusion, the results reported in this paper should be useful not only for the analysis of turbulence simulations, but also for observational studies. Indeed, 1) we have validated the possibility of applying local methods, which are the only ones applicable to satellite data, to infer the overall/non-local shape of current structures; 2) we conjecture that imposing a proper threshold on the current density would benefit the statistical study of current structures in satellite data; 3) we have provided an overview of the local magnetic configurations emerging in different turbulence regimes, also stressing the different behavior found when considering exclusively points within current structures with respect to what emerges from points belonging to the rest of the turbulent environment; 4) we have shown via such magnetic configuration analysis that, when there is a mechanism injecting relatively high-level fluctuations close to the ion-kinetic scales, our 3D-3V simulations can reproduce the structures that emerge in MMS data for the periods studied by Phan et al. (2018) and Stawarz et al. (2019).
Fig. A.1: Scatter plot of thickness versus current density for SIM 2D at t_2D^eddy-turnover, showing the differences between the two methods in estimating the thickness. The current density is computed at the center of each current structure. By "mask method" we mean the method for finding the thickness explained in Section 4.1, which uses the maximum distance between points satisfying J ≥ J_th. By "FWHM method" we refer instead to the alternative method explained in Appendix A, which computes the thickness using the full width at half maximum of the interpolated profile of J. In the sub-panel we show a schematic representation explaining the difference in the estimates given by the "FWHM method" and the "mask method".
Appendix A: Alternative definitions for structure's dimensions
In our work, we defined the thickness ℓ_min as the maximum distance between two points with J ≥ J_th along the direction of strongest variation of the current density, as given by the eigenvectors of the Hessian matrix. We present here an alternative definition of thickness, as proposed by Zhdankin et al. (2013), and we show how some of our results change when using this different definition. In particular, we can alternatively define the thickness by considering the interpolated profile of J along the direction of strongest variation given by the Hessian matrix, and computing it as the full width at half maximum of this profile. The two methods can produce different estimates of the thickness depending on the value of the current density peak. In Figure A.1 we show the scatter plot of thickness versus current density for SIM 2D at t_2D^eddy-turnover: in red the estimates given by the method used in the main text (maximum distance between two points with J ≥ J_th, called for brevity the "mask method"), and in blue those given by the method used in this appendix (full width at half maximum of the current density profile, called the "FWHM method"). We chose the 2D simulation for this comparison, and the eddy-turnover time rather than the saturation time, simply because in this simulation, at this time, the current structures are few enough that the scatter plot remains easy to interpret. The current density is computed at the center of each current structure considered. We can see that the two estimates turn out to be different, especially at low values of the current density. In the sub-panel, we show a schematic representation explaining the difference in the estimates given by the "FWHM method" and the "mask method". In particular, since for a unimodal profile the level set J ≥ J_th is wider than the level set J ≥ J_cen/2 whenever the current density J_cen at the center of the structure is greater than 2J_th, in that regime the thickness estimated using the "mask method" is generally bigger than the one estimated using the "FWHM method"; on the contrary, when the current density is smaller than 2J_th, the thickness obtained with the "FWHM method" is the larger one (a toy numerical comparison is sketched at the end of this appendix). The two values are almost equal when J_cen ∼ 2J_th. If we use this alternative definition of thickness to compute the structure shape factor planarity, some features change (we note that the estimate of elongation does not change, since we have not changed the definitions of width and length). In particular, in Figure A.2 we show the new distribution of E and P which emerges, which is somewhat different from the one of Figure 4. To quantify, we now have: for the elongation, mean ∼0.8, median ∼0.9, standard deviation ∼0.2; for the planarity, mean ∼0.4, median ∼0.4 and standard deviation ∼0.2. The distribution is in poorer agreement with the ones for the HJ, NB and AV methods shown in Figure 5, and most of the structures have a planarity between 0.2 and 0.6, and are thus more filament-like.
The width (the intermediate length) is defined as the maximum distance between points having J ≥ J_th in the plane perpendicular to e_min and passing through the current peak of the structure. For the sake of coherence, we prefer the "mask method" for defining the thickness. In this way, a cylindrical structure would have coherent values for both width and thickness, independently of the intensity of the current.
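A toy numerical comparison of the two estimators on an idealized Gaussian profile (our choice, for illustration only) reproduces the crossover at J_cen ≈ 2J_th discussed above:

```python
import numpy as np

def thickness_mask(x, J, J_th):
    # "Mask method": maximum distance between points with J >= J_th.
    pts = x[J >= J_th]
    return pts.max() - pts.min() if pts.size else 0.0

def thickness_fwhm(x, J):
    # "FWHM method": full width at half maximum of the profile.
    pts = x[J >= 0.5 * J.max()]
    return pts.max() - pts.min()

x = np.linspace(-5.0, 5.0, 2001)
J_th = 1.0
for J0 in (1.5, 2.0, 4.0):                 # peak current density at the center
    J = J0 * np.exp(-x**2 / 2.0)           # idealized Gaussian profile
    print(J0, thickness_mask(x, J, J_th), thickness_fwhm(x, J))
# J0 < 2*J_th: FWHM > mask; J0 = 2*J_th: equal; J0 > 2*J_th: mask > FWHM.
```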
Appendix B: Selection and processing of MMS data
In a number of passages in the main text, we referred to the two burst-resolution MMS data intervals analyzed in Stawarz et al. (2019), which have been chosen here for comparison with the results of the numerical experiments detailed in Section 2. The choice of these MMS data intervals is motivated by the fact that our simulation setup (in particular for SIM B-wf and SIM B-sf) is similar to that used in Califano et al. (2020), which was able to reproduce the turbulent and reconnection regime observed during that period.
Throughout our analyses, the MMS magnetic field data have been taken from the FluxGate Magnetometer (FGM; see Russell et al. (2015)), while the Fast Plasma Investigation (FPI; Pollock et al. (2016)) has provided the density and pressure measurements used to determine the ion plasma beta and ion inertial length. All the aforementioned fields have been interpolated onto the MMS2 magnetic field data and averaged over the four-spacecraft fleet. The value of J has been obtained by applying the curlometer technique of Dunlop et al. (1988), and the N tensor was derived as in Fadanelli et al. (2019), i.e. by performing a linear estimation of the magnetic field gradient and then combining the resulting values with the four-spacecraft-averaged magnetic field data.
In order to compare the results of the MCA applied to simulation and MMS data, we need to apply three different levels of selection to the latter. In particular: 1) we select points for which at least two of the eigenvalues of the N matrix are well determined by MMS measurements; 2) we select data with β ∈ [0.3, 3]; 3) we select data with a resolution comparable to that of our simulations.
More in detail, the first selection has been performed exactly as in Fadanelli et al. (2019), i.e. by calculating the average inter-spacecraft distance S_C at each instant and then setting a minimal resolution threshold of (δB/(S_C B))^2 for the eigenvalues of N, it being understood that any eigenvalue below such a threshold is not well resolved. Requiring that at least two of the eigenvalues are well resolved means that, on the one hand, we accept uncertainty on the smallest eigenvalue and on the elongation measures but, on the other hand, we are still able to determine correctly the direction of minimum variation (this is because the corresponding eigenvector is, by construction, perpendicular to the other two, which are well determined). The requirement that two eigenvalues be well resolved is generally a good compromise between the need for precision and the difficulty of determining correctly the smallest eigenvalue, which is generally extremely small and therefore easily falls below the MMS resolution threshold. Choosing the two-well-determined-eigenvalues selection criterion for the two turbulence intervals we consider here implies that over 99% of the original data is retained by this first procedure. The second selection is performed on the ion plasma beta, obtained as the ratio of the spacecraft-averaged kinetic and magnetic pressures at each data point. For the turbulence intervals we consider in this work, this is the procedure which leads to the largest reduction of the available dataset, which gets reduced to about 20% of its original size.
With the third and last filtering of the MMS data we intend to eliminate all those points for which simulation and satellite data have different resolutions. To compare the precision of MMS data and of the simulations, we introduce two quantities that we call "resolution factors", defined as follows: for the MMS (satellite) measurements, the resolution factor is the inter-spacecraft separation divided by the ion inertial length (we recall that the only possible derivative calculation in this case follows from a linear interpolation of the satellite measurements); for the simulations, the resolution factor is max(3dx/d_i, 3dy/d_i, 3dz/d_i), and by this definition we acknowledge that the effective minimal distance over which an HVM numerical simulation can well represent plasma physics should be several times (here we have chosen three) the distance between neighboring data points.
For the MMS data analyzed in Stawarz et al. (2019) the resolution factor is almost always below 0.3. This resolution factor is thus comparable with the one we obtain for our simulations, in particular for SIM B-wf and SIM B-sf, for which it is 0.22 in all directions (instead, we obtain 0.9 for simulation run SIM A, corresponding to the z-direction, which is the least resolved, while for the x- and y-directions we obtain 0.5). Given these values, we have decided to accept all the previously selected data into an MCA procedure which is to be compared to that performed on the HVM simulation results.
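Schematically, the three selections can be combined into a single boolean mask; the sketch below assumes per-point arrays and treats the relative magnetic uncertainty δB/B as a given input:

```python
import numpy as np

def select_mms_points(sigma, beta, S_C, d_i, dB_over_B):
    # sigma: (n, 3) eigenvalues of N; beta: (n,) ion plasma beta;
    # S_C: (n,) mean inter-spacecraft distance; d_i: (n,) ion inertial length;
    # dB_over_B: (n,) relative magnetic uncertainty (instrument-dependent).
    threshold = (dB_over_B / S_C) ** 2          # (dB / (S_C * B))**2
    resolved = (sigma > threshold[:, None]).sum(axis=1) >= 2
    beta_ok = (beta >= 0.3) & (beta <= 3.0)
    res_ok = S_C / d_i < 0.3                    # resolution factor cut
    return resolved & beta_ok & res_ok
```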
By the whole procedure just detailed, a dataset containing about 200 thousand points has been selected. Given these data, it is possible to obtain valid statistics only for what we called "generic" points, since any further selection aiming to retain only samples retrieved inside current structures would leave us with no more than a few thousand points, which is not sufficient for statistics in our case.
"Physics"
] |
Common defects in injection molding of plastic products and their influence on product quality
Abstract: The article deals with issues related to quality management and quality assessment in the production of plastic articles by injection molding. Expert knowledge collected in textbooks and literature makes it possible to become acquainted with the characteristics of plastic article production and with the product quality defects arising in such processes. The characteristics and technology of plastics processing are discussed, and the most frequent quality defects occurring in the production of articles made of plastics by injection molding are listed. On the basis of expert knowledge collected in the literature, a series of actions leading to the elimination of each of the mentioned quality defects is also proposed.
INTRODUCTION
Plastic articles have become an important and integral part of modern life. Plastics are used to make articles, parts and machines that we use every day. The clothes we wear, the cars we drive, the devices we use to communicate with each other, the containers we use to store food or objects: all these things are partly or entirely made of plastics. Plastics owe their popularity to their low manufacturing costs compared to the materials they replace (wood, ceramics, metals). Plastics are also characterized by the fact that they can be given properties that are not available in other materials (color, flexibility, hardness, non-flammability, odorlessness). The advantages of this material are therefore its ease of processing and recovery and its wide spectrum of applications [13].
Product quality is one of the critical measures in production evaluation. Introducing faulty products onto the market, which do not meet quality standards, may have serious consequences for the company's image and for consumer safety. It is therefore an important issue in the field of production management, and it begins at the stage of product design, as part of production preparation. Quality is defined as a set of product features which aim to satisfy the customer to the assumed extent and which, together with other product features, affect the competitiveness of the product. However, it is rare to achieve product quality fully compliant with the design assumptions; therefore permissible deviations are determined, which make it possible to state the compliance of the product with the accepted quality standards [8].
Quality features of a product within the quality dimensions can be divided, according to the possibility of their measurement, into [7]:
1. Measurable (numerical): the result of the measurement can be expressed numerically in a specific unit of measurement. A distinction is made between continuous and discrete measurable characteristics. Continuous measurable characteristics are those whose measurement is limited by the resolution of the measurement method. Discrete measurable characteristics are those whose measurement yields a finite number of states.
2. Non-measurable (alternative): they cannot be measured or expressed numerically. Such qualitative characteristics can, however, be observed and studied, and they are expressed descriptively.
Defining quality in the area of industrial manufacturing in a simple and systematic way may prove to be a challenge due to the labile nature of this concept. Undertaking the quality assurance of a product must take into account the many factors that guide a potential buyer. The manufacturer should ensure the competitiveness of the product, meet the quality requirements imposed at the design stage and the requirements set out in law on safe operation, ensure reliability (to increase user satisfaction, but also to reduce servicing costs), and ultimately meet customer expectations [11].
PRODUCTION OF PLASTIC ARTICLES
This chapter deals with the production of plastic articles and describes the manufacturing technologies.
Extrusion
Extrusion usually takes place on machines (Fig. 2) equipped with a feeder, a screw system and an extrusion nozzle. Feedback heater systems are used for precise temperature control. The screw system is usually divided into three zones: the feed zone, where material is loaded into the screw system; the transition zone, where material is plasticized; and the melt-conveying zone, where the molten polymer is conveyed towards the nozzle. The shape of the nozzle opening determines the final cross-sectional geometry of the extruded material [10].
Blow moulding
Blow moulding is the forming of a hollow object (such as a bottle) by inflating a molten thermoplastic tube, called a "parison", into the shape of a mold cavity. The process consists of extruding a parison onto which female mold halves are closed (Fig. 1). The mold halves contain the shape of the product. The bottom opening of the parison is pinched shut by the closing female mold halves. A pressurized gas (usually air) is introduced into the parison through a blow pin, blowing the heated parison out against the cavity walls to form the product [1].
Calendering
Calendering plastics is a process that is used in many industries to produce rolled sheets of precise thickness and appearance (Fig. 3). Calendering of molten polymers is a continuous sheet or film production process that is done by passing materials through sets of heated, counter-rotating rolls [12].
Injection molding method
Injection molding is a major processing technique for converting thermoplastic and thermosetting materials, consuming worldwide approximately 32% of all plastics [9]. The injection molding process is carried out on injection molding machines (Fig. 4). It is considered the dominant process in plastics processing. The process is carried out in cycles and consists of plasticizing, melting and mixing the plastic, followed by injection into an injection mold. Figure 5 shows a schematic of an injection molding machine [14]. Injection molding machines consist of a plasticizing system (similar to those in extruders), a drive system and a tool system. The following phases of the injection molding process are described below [5]:
1. Mold closing: this is the phase in which the tool is closed. This is accomplished by rapidly advancing the moving part of the mold.
2. Injection: during this phase the injection system is moved against the tool. The nozzle is brought into contact with the tool sleeve. By rotation of the screw in the plasticizing system, the liquid material is injected into the tool.
3. Pressing: the screw in the plasticizing system is pressed against the nozzle, which forces further plastic into the tool cavity. This phase is necessary to compensate for the loss of plastic volume, since the volume of plastic pressed into the mold decreases as a result of shrinkage of the plastic.
4. Plasticization: a rotary screw movement takes place, as a result of which plastic is taken from the feeder. The screw moves back to its initial position. During its movement, the material taken up by the screw is subjected to temperature in order to plasticize, mix and compress it (compression of the plastic takes place due to the shape of the screw).
5. Opening: during this phase, the closing pressure on the fixed and moving mold parts decreases. The mold is opened and the part is ejected from the tool. Ejection from the tool is usually carried out by means of ejector pins integrated in the tool.
In their study, Garbacz and Sikora [5] list the following factors that are important to the injection molding process: temperature of the heating zones of the plasticizing system, temperature of the injection nozzle, mould temperature, injection pressure, clamping pressure, mould closing force, screw speed, screw stroke, cycle time and time of each cycle phase, injection speed.
Rotomoulding
Rotational molding (also called rotomolding) is the process of making large, hollow objects out of plastic. It is characterized by the fact that the resulting products have no seams. The parts produced have a wide range of applications (from tanks to containers or furniture). Rotomoulding is done by heating a thin-walled mould which contains a polymer powder (Fig. 6). During heating, the mold rotates multiaxially. The material inside heats up, melts and coats the inside walls of the mold. Then the mold is cooled, allowing the product to solidify. The part is removed and the mold is loaded for the next cycle [4].
QUALITY AND DEFECTS OF PLASTIC ARTICLES IN INJECTION MOLDING
As with all manufacturing methods, quality defects can occur during the injection molding process, causing deficiencies or unsatisfactory part properties. Selected quality defects of molded parts occurring during injection molding are described below.
Inclusions
These are dark spots visible on the surface of a molded part (Fig. 7). The cause of inclusions is usually contamination of the raw material (granulate) or its degradation [6]. The actions to be taken to eliminate this defect are [6]: reduce the temperature in the plasticizing unit, reduce the injection speed and pressure, clean the machine, and check the plastic for contamination.
Brittleness/breakage and weld marks
The tendency of the molded part to excessive cracking and crumbling at low pressure or impact is most often caused by degradation of the raw material during the process. The moldings also crack at the plastic welding points (Fig. 8), where micro-cracks lead to weakening of the structure [6]. Actions that can be taken to eliminate this molding defect are [6]: reduce the temperature in the plasticizing unit, check contamination of the raw material, check moisture content of the raw material, reduce the proportion of regranulate in the mix.
Gassings
Gassings (Fig. 9) occur due to improper mold ventilation and gas entrapment at the time of plastic injection [6]. In order to eliminate the problem of gassing, it is necessary to [6]: introduce structural changes in the mould in order to change the polymer flow, change the location of the injection point, or adjust the wall thickness.
Discoloration
Discoloration or streaks are usually caused by uneven dye dosage at the plasticization stage (Fig. 10). They can also be a result of incompatibility of the polymer with additives used [6]. In order to eliminate the problem of discoloration and streaking, it is necessary to [6]: check the compatibility of the polymer with the additives used, check the screw and screw housing for damage, increase the injection speed, change the screw rotation speed parameter, check the temperature of the plasticizing process, increase the pressure during the plasticizing process.
Ejector pin marks
Ejector pin marks occur as visible dots, a pin-shaped difference in surface gloss (Fig. 11), or white marks caused by material stress [6]. To solve the problem of ejector pin marks, one can [6]: make sure that the ejector pins finish flush with the inner surface of the mold, increase the cooling time, reduce the reheating temperature of the mold, reduce the temperature of the plasticizing process, or install additional ejector pins or increase the diameter of those present.
Flashes
Flash (Fig. 12) occurs when the material flows out of the mold pocket during the injection process. The problem usually occurs with low-viscosity plastics. It is a common problem with first production runs on a new tool (mold), when the mold surfaces have not yet aligned [6]. To solve the problem of flash formation, one can [6]: reduce the injection speed, reduce the temperature of the plasticizing process, check the condition of the mold for damage, and increase the clamping pressure of the mold.
Differences in surface texture
Differences in surface texture are considered a quality defect when they occur despite the uniform surface finish of the injection mold [6].
In order to prevent the discussed quality defect, one should [6]: make sure that the mold surface has not been damaged, increase the temperature of the plasticizing unit, increase the clamping pressure, increase or reduce the injection volume.
Short molding
Short shots (Fig. 13) are material deficiencies usually occurring at locations away from the injection point or where the product walls are thin. Typically, short shots are caused by insufficient injection volume, screw valve leakage, or insufficient injection pressure [6]. In order to eliminate the possibility of short shots, it is necessary to [6]: increase the injection volume, increase the injection speed, increase the temperature in the plasticizing unit, increase the mould temperature, apply reheating of the mould, choose a material with lower viscosity, increase the injection point size, or change the position of the injection point.
Melt flow marks/streaks
Plastic flow marks (Fig. 14) are usually caused by excessively high moisture content of the raw material. Most often they occur near the injection point, spreading radially from this point [6]. In order to prevent this quality defect, it is necessary to [6]: control the moisture content of the raw material, dry the raw material and ensure that the influence of external conditions on the plastic is limited, reduce the amount of raw material in the tank above the plasticizing unit, and ensure proper storage conditions for the raw material.
Part mass
The mass of a molded part provides important information about the quality of its manufacture. Production technology assumes that the mass must be within limits determined experimentally or at the stage of product design. Exceeding the permissible deviations from the acceptable mass indicates a problem with the process and, in effect, a deterioration of product quality, through weakening of its structure and strength or through other quality defects such as gassings, short shots or flashes [3].
Insufficient product weight may be associated with weakened structure and strength of the product and the occurrence of short molding [3].
Excessive product weight can also be linked to weakening of the structure and strength of the product and to the occurrence of flashes or deformation. Improper mass can also be linked to problems such as the part getting stuck in the mould (due to shrinkage) [3].
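As a minimal illustration of such a mass-based check (the nominal mass and tolerance below are hypothetical values, not taken from the text):

```python
# Hypothetical nominal mass and permissible deviation (illustrative values
# only; in practice they are fixed experimentally or at the design stage).
NOMINAL_G, TOL_G = 125.0, 0.8

def classify_mass(mass_g):
    if mass_g < NOMINAL_G - TOL_G:
        return "underweight: check for short shots / weakened structure"
    if mass_g > NOMINAL_G + TOL_G:
        return "overweight: check for flash, deformation or mold sticking"
    return "within tolerance"

for m in (123.9, 125.2, 126.1):
    print(m, classify_mass(m))
```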
SUMMARY AND CONCLUSIONS
The article discusses the issues concerning the production of plastic products, with particular emphasis on the injection molding method. The quality defects of products made by this method were discussed. Thanks to the knowledge collected in the literature, solutions eliminating the mentioned quality problems were also proposed. This study allowed us to come to the following conclusions:
1. Plastics owe their popularity to their low manufacturing costs compared to the materials they replace. They are also characterized by the fact that they can be endowed with properties unattainable by other materials. The advantages of this material are its ease of processing, recovery and wide range of applications.
2. The field of plastic injection molding is extensive.
The quality of products manufactured by this method is influenced by many process input factors, but also by auxiliary processes (e.g. raw material storage conditions).
3. As with all manufacturing methods, quality defects can occur during the injection molding of plastic articles, resulting in deficiencies or unsatisfactory properties of the molded part.
4. The quality control methods and the features to be controlled for plastic products should be chosen with regard to the technology of a given product, but the most frequent defects described in the literature analysis are characteristic of the production of this type of product. Their occurrence depends on many factors, such as the degree of wear of machines and tools or the kind of raw material. It should also be pointed out that they are strongly connected with the technological parameters (settings of the injection moulding machines) of the process.
5. Expert knowledge accumulated in textbooks and scientific studies, summarized in this article, makes it possible to attempt to eliminate quality defects of moldings. At the same time, the mutual dependencies of the process input parameters influencing their occurrence are pointed out. It is worth mentioning the high complexity of the injection molding process; therefore the proposed solutions are of limited use. All factors should be taken into consideration when trying to eliminate the quality defects of molded parts (injection molding machine settings, type of raw material, raw material storage, mold condition, etc.).
"Materials Science",
"Engineering"
] |
ImageServer, a Tool for On-line Processing and Analysis of Biological Images
Abstract We present a novel tool for processing and analysis of images of gene expression patterns stored in a relational database. This tool, known as ImageServer, is portable across different software/hardware platforms and supports both basic and subject domain oriented operations on images. The tool is used to process and analyze images stored in FlyEx database (http://urchin.spbcas.ru/flyex); software and documentation are available for download from http://urchin.spbcas.ru/downloads/IS/IS.htm.
Introduction
In biology, visual information has been estimated to represent as much as 70% of all data generated. In recent years the widespread use of computers and digital image capture devices has resulted in the accumulation of large amounts of digital images. However, in spite of these technological advances, visual information is still for the most part analyzed by qualitative methods. Image processing is usually performed by graphics packages which are installed on a local computer and do not have any built-in tools for image management [1,2]. This approach requires downloading an image from a database to a local computer in order to process it, as well as inserting the processing results back into the database. So far only a few systems provide a solution for integrated storage, processing and analysis of image data [3]. Thus the development of software for database-based processing and analysis of visual information is an important task for biology and computer science.
In this work we present ImageServer, a software tool designed for on-line processing and analysis of biological images stored in a database. ImageServer performs both basic and subject domain oriented operations on images. Currently the target application of this tool is the processing and analysis of images of segmentation gene expression patterns in Drosophila stored in the FlyEx database [4]. FlyEx is a spatiotemporal atlas of segmentation gene expression. It was developed using the IBM DB2 RDBMS to answer questions about the dynamics of formation of segmentation gene expression domains.
ImageServer handles client requests sent over HTTP, in particular via the GET method. Interaction via the HTTP protocol allows the use of both firewall and proxy servers. ImageServer permanently waits for client requests, listening on an IP port with a given number. To guarantee the parallel and independent work of several clients, a separate thread is created for each client's request. Clients invoke ImageServer by including standard <image> tags in the body of an HTML page. The <image> tags have to contain the server URL and the parameters: images (operands), operations and other settings. IS can access images stored in a relational database as BLOBs, image files in popular graphic formats (JPEG, GIF, TIFF, BMP, PNG, etc.), as well as files in RAW format, which represents the byte array of intensities for each image pixel. By default the target image has the JPEG format; however, the output format can be specified explicitly using the corresponding parameter. To provide software independence, ImageServer interacts with a database via the Java Database Connectivity (JDBC) protocol. An XML template makes it possible to easily connect ImageServer to any database. In this template the information about user, password and database, as well as an SQL query, should be specified. The SQL query contains identifiers, which are replaced by the values of parameters present in the HTTP request.
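To make the request flow concrete, here is a minimal sketch of the same architecture in Python (the original tool is written in Java; this is not the authors' code). The table and column names (`images`, `data`, `id`), the `op` parameter, and the single `contrast` operation are hypothetical; Pillow stands in for JMagick/ImageMagick and SQLite for a JDBC-connected RDBMS.

```python
import io
import sqlite3
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse

from PIL import Image, ImageOps  # Pillow stands in for JMagick/ImageMagick

def load_blob(image_id):
    """Fetch an image BLOB by id; in the real tool the SQL query comes
    from an XML template and runs over JDBC."""
    with sqlite3.connect("images.db") as con:  # hypothetical database file
        row = con.execute("SELECT data FROM images WHERE id = ?",
                          (image_id,)).fetchone()
        if row is None:
            raise KeyError(image_id)
        return row[0]

class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):  # GET carries the image id, operation and settings
        params = parse_qs(urlparse(self.path).query)
        img = Image.open(io.BytesIO(load_blob(params["id"][0])))
        if params.get("op", [""])[0] == "contrast":  # one sample operation
            img = ImageOps.autocontrast(img)
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG")  # JPEG is the default
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(buf.getvalue())

if __name__ == "__main__":
    # ThreadingHTTPServer serves each request in its own thread, mirroring
    # the per-client threads described above.
    ThreadingHTTPServer(("", 8080), ImageHandler).serve_forever()
```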
Methods for image processing
ImageServer is currently applied to process and analyze images of segmentation gene expression patterns in Drosophila stored in the FlyEx database. The basic image processing operations are implemented by means of the JMagick class library, which represents a Java interface to the ImageMagick package. These packages are publicly available [5,6]. The subject domain oriented methods for image processing, namely background removal and image registration, are described below.
Description of subject domain
FlyEx is a quantitative atlas of segmentation gene expression at cellular resolution [4]. Segments are repeated units of the insect body. In the fruit fly the segmental architecture is determined at the 'syncytial blastoderm' stage, in particular at cleavage cycle 14A, which lasts from 130 to 180 minutes after fertilization [7]. At this stage of development an embryo is a hollow asymmetrical ellipsoid of nuclei which are not separated by cell membranes. The initial determination of the segments is a consequence of the expression of 16 genes, which are mainly transcription factors [8,9]. Expression patterns were imaged in embryos stained with fluorescence tagged antibodies, and these images served as the raw material for the quantification of gene expression [10,11,12]. As a result, reference data on the expression of segmentation genes at cellular resolution and at each time point were constructed [12,13]. Images and quantitative gene expression data from individual embryos, as well as reference gene expression data, were used to study the dynamics of formation of segmentation gene expression domains, the precision of development and pattern formation, as well as the mechanisms of segment determination [14,15].
Removal of background signal
At the very first stage of data quantification the non-specific background signal is removed from images of gene expression. The aim of background removal is to bring the data to the unified standard form with a zero background and to get rid of distortions of gene expression patterns caused by the presence of a background signal. Our method [11] is based on the observation that the level of expression of a given gene in a null mutant embryo for that gene is well fit by a very broad two dimensional paraboloid. This paraboloid is automatically determined from the areas of wild type embryos in which a given gene is not expressed and the whole image is then normalized by the paraboloid to remove background from the entire embryo by a linear rescaling of pixels' intensity. The coefficients of the paraboloid are precomputed and stored in the database.
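A minimal sketch of this scheme in Python/NumPy follows; it is illustrative only, assuming a full quadratic surface fitted by least squares to non-expressing pixels and a simple subtract-and-rescale normalisation (the exact procedure of [11] may differ).

```python
import numpy as np

def fit_paraboloid(x, y, z):
    """Least-squares fit of z ~ a + b*x + c*y + d*x**2 + e*y**2 + f*x*y
    to pixels (x, y, z) sampled where the gene is not expressed."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def remove_background(img, coeffs):
    """Subtract the fitted paraboloid (precomputed coefficients, as stored
    in the database) and linearly rescale intensities to zero background."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    a, b, c, d, e, f = coeffs
    bg = a + b * xx + c * yy + d * xx**2 + e * yy**2 + f * xx * yy
    flat = np.clip(img - bg, 0.0, None)
    return flat * (img.max() / max(flat.max(), 1e-9))  # linear rescaling
```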
Registration
Image registration is necessary to eliminate small spatial differences between the expression patterns of one gene in individual embryos of the same age. We have developed a registration method based on minimization of the squared distance between extrema of the expression pattern of the even-skipped gene in different embryos by an affine coordinate transformation.
Locations of extrema are determined by two methods: quadratic spline approximation and fast dyadic redundant wavelet transform (FRDWT) [10,11,12]. The coefficients of affine transformation computed for both methods are stored in the database for all the images.
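Given matched extrema, the affine fit itself reduces to a linear least-squares problem; a sketch follows (extrema detection by spline approximation or FRDWT is not reproduced here).

```python
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of matched extrema coordinates. Solves
    min sum ||A p + t - q||^2 over the affine map (A, t) by least squares."""
    X = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)    # P is 3x2
    A, t = P[:2].T, P[2]
    return A, t

def apply_affine(points, A, t):
    """Map (N, 2) points through the fitted transformation."""
    return points @ A.T + t
```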
User interface
To retrieve images a user fills query forms, which can be accessed from the main page of the FlyEx database by selecting links "Images of gene expression patterns" or "Analysis tools. Images of gene expression patterns." The control panels for analysis of retrieved images are placed at the bottom of the HTML page containing a query result (Figure 2).
Analysis of image information
A single image can be subjected to scaling, cropping of a rectangular area, filtering of fluorescence intensity, contrast enhancement and background removal. Admissible operations on a set of images are masking of one image by another, combination of up to three greyscale images into a color one, generation of the absolute value of the difference between two images, and registration of several images. It is possible to combine several operations in a single request.
Combining of up to three greyscale images into the color one
The confocal microscope at our disposal permits detection of the expression of up to three genes in one embryo; for each gene a greyscale image is obtained and stored in the database. For simultaneous visualization of the expression of genes scanned in one embryo, the greyscale images can be combined into one color image, in which the expression pattern of each gene is coded by one of the basic colors of the RGB format (see Figure 3).
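In array terms the operation is simply channel stacking; a sketch (NumPy stands in for the server's JMagick-based implementation):

```python
import numpy as np

def combine_rgb(red, green, blue):
    """Stack three equal-shape greyscale images (2-D uint8 arrays) into one
    RGB image, one gene's expression pattern per channel, as in Figure 3D."""
    return np.dstack([red, green, blue])

# e.g. combine_rgb(eve_img, hb_img, kni_img) for the genes of Figure 3
```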
Masking of one image with another
Data quantification includes segmentation of images as an essential step. To construct a binary nuclear mask, each pixel of the raw image is classified as belonging to a nucleus or not, so that a mask pixel is equal to one if and only if that pixel is located on a nucleus. Hence the quality of the quantitative data is defined by the accuracy of the nuclear mask. A user can observe a mask (Figure 4A) and superpose it on the image of expression patterns. This results in a masked image displaying the localization and shape of the nuclei (Figure 4B). Masks of all the embryos are stored in the database.
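A sketch of the masking operation itself (the mask construction, i.e. the pixel classification, is part of the quantification pipeline and is not shown):

```python
import numpy as np

def mask_image(img, nuclear_mask):
    """Zero out every pixel not lying on a nucleus. nuclear_mask is binary:
    1 on nuclei, 0 elsewhere, as described above (cf. Figure 4)."""
    return img * (nuclear_mask > 0)
```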
Figure 3. Visualization of expression patterns of genes scanned in embryo dr2. The greyscale images of expression patterns of genes even-skipped (A), hunchback (B) and knirps (C) obtained with confocal microscope. (D) The resultant color image with even-skipped in red, hunchback in green and knirps in blue. Images are from FlyEx database, image in D is generated on-the-fly by ImageServer.
Figure 4. (A) The binary nuclear mask. (B) The image from Figure 3D masked by the nuclear mask: only those pixels are colored that belong to the mask, i.e., are white in Figure 4A.
Estimation of background signal
Each raw image contains a certain amount of background signal, which can be observed visually. To estimate the level of background, a user can view one and the same image before (Figure 5A) and after (Figure 5B) background removal. It is evident that the image without background has much higher contrast. It should be noted that the image of the background alone cannot be obtained by simply subtracting the image with no background from the original one, because the background removal algorithm requires rescaling of the pixel intensities (see 2.2.2). The background image reconstructed from the paraboloid coefficients is not informative visually due to its low contrast. Using the web interface shown in Figure 2, the spatiotemporal variability of gene expression can be visually checked. Figure 6 shows the variability of expression of the highly dynamic even-skipped gene. For each time point (i.e., cleavage cycle), four images representing the gene's expression in individual embryos are displayed. The central 10% strip along the A-P axis was cut from each image for better observation. The degree of spatial variability can easily be seen by comparing the images of embryos belonging to one cleavage cycle. Temporal variability can be estimated by comparing images from different cleavage cycles. At cleavage cycles 10 and 13 even-skipped is expressed in one broad domain; by late cleavage cycle 14A, 7 narrow isolated domains of expression are formed.
Temporal dynamics of gene expression
The temporal dynamics of gene expression can be visualized by comparing images of embryos of different ages. ImageServer can combine up to three greyscale images of expression patterns into one color image (see 3.2.1). Figure 7D illustrates the result of combining three images displaying the expression pattern of the even-skipped (eve) gene at different time points, namely temporal classes 1, 4 and 8 of cleavage cycle 14A. In the resultant image each combined image is coded as one of the basic colors of the RGB format. Thus the areas where eve is expressed at all times appear as blends of colors, while the areas where eve starts or stops being expressed are displayed in one of the basic RGB colors. It is evident that the leftmost eve expression domain (the first stripe) has a stable position, while the other six stripes move to the left (to the anterior) with time, and the movement of the rightmost (posterior) stripe is the most pronounced.
Estimation of the accuracy of registration
Discussion
Image management and the processing of image information are usually performed by different types of software. Images are processed by graphic packages, which do not keep track of data and images in a rigorous way, while databases simply present selected images and metadata to a user [16,17]. The solution we propose here integrates image processing and analysis with information storage. We have designed an application server which, on the one hand, can access images stored in any database or file management system and, on the other hand, supports different operations on images, ranging from image scaling to the registration of several images. Owing to the use of the Java programming language, ImageServer can be ported to any software/hardware platform.
The efficiency of on-line processing and analysis of images stored in a database depends to a great extent on the size of the images. Currently ImageServer is applied to process and analyze images of segmentation gene expression patterns in Drosophila stored in the FlyEx database. These images are not large: the typical size of an image of the expression pattern of one gene scanned in one embryo is at most about 1300 × 650 pixels, and the typical size of an image file in JPEG format is about 60K. While in our system the joint operation of image retrieval, conversion to JPEG format and visualization takes about 300 msec, the execution of the same operation, as well as of other image processing/analysis operations, on larger images could take considerable time. To speed up the performance of ImageServer on larger images, we plan to modify the tool to support processing and analysis of images subdivided into tiles.
ImageServer can easily be extended to support the processing and analysis of new images by the addition of new basic and subject domain specific operations implemented as program modules written in C++ or Java. The tool is the core of the Laboratory Management System for processing and analysis of images of gene expression patterns in situ, which is currently being developed by the authors. The ImageServer software and a test database can be downloaded from http://urchin.spbcas.ru/downloads/IS/IS.htm.
"Biology",
"Computer Science"
] |
Oxidative Precipitation of Manganese from Acid Mine Drainage by Potassium Permanganate
Although oxidative precipitation by potassium permanganate is a widely recognised process for manganese removal, research dealing with highly contaminated acid mine drainage (AMD) has yet to be performed. The present study investigated the efficiency of KMnO4 in removing manganese from AMD effluents. Samples of AMD originating from an inactive uranium mine in Brazil were chemically characterised and treated with KMnO4 at pH 3.0, 5.0, and 7.0. Raman spectroscopy and geochemical modelling using the PHREEQC code were employed to assess the solid phases. Results indicated that manganese was rapidly oxidised by KMnO4 in a process enhanced at higher pH. The greatest removal, 99%, occurred at pH 7.0, when treated waters presented manganese levels as low as 1.0 mg/L, the limit established by Brazilian legislation. Birnessite (MnO2), hausmannite (Mn3O4), and manganite (MnOOH) were detected by Raman spectroscopy. These phases were consistently identified by the geochemical model, which also predicted phases containing iron, uranium, manganese, and aluminium during the correction of the pH, as well as bixbyite (Mn2O3), nsutite (MnO2), pyrolusite (MnO2), and fluorite (CaF2) following the KMnO4 addition.
Introduction
The oxidation of sulphide minerals exposed to oxygen and water produces acid effluents commonly referred to as acid mine drainage (AMD), described in detail elsewhere [1]. Although the generation of acid drainage is a natural phenomenon, mining activities can dramatically increase its production due to the large amounts of material usually exposed. In addition to its main characteristics (e.g., high acidity and sulphate levels), AMD features extensive chemical diversity, including metals such as iron, aluminium, and manganese in elevated concentrations [2]. These effluents are potentially hazardous to the environment. However, the technologies available to deal with AMD are either unsuitable or costly [3]. Furthermore, practices are fairly site-specific and vary significantly from one site to another, which poses several problems for the implementation of available methodologies.
In Brazil, for instance, as a consequence of the high content of manganese in the soil, the concentration of this metal in AMD can be up to 150 times the limit of 1.0 mg/L set by CONAMA Resolution 430 (Brazilian legislation) [4]. However, the majority of studies performed so far have addressed the removal of manganese from waters with low contamination, such as drinking water (e.g., [5][6][7][8]). The management of AMD with exceptionally high concentrations of manganese sets a totally new and challenging scenario for remediation technologies, and research on this topic is still necessary.
Overall, the challenges in removing manganese from AMD arise from two main factors. First, the conditions of the effluents, that is, pH and Eh, are unfavourable for manganese precipitation; second, AMD effluents are very complex in terms of chemical composition. Currently, the most widely employed treatment consists of active systems using chemical neutralising agents such as sodium hydroxide and limestone to precipitate manganese [1,9]. However, the need for a high pH condition (i.e., pH > 11) raises expenses due to chemical consumption and still leads to unsatisfactory treated waters considering the recommended pH range between 5 and 9 (CONAMA Resolution 430) [4]. Furthermore, this process also generates large amounts of bulky sludge, which requires further treatment and appropriate disposal [1,10].
In this context, oxidative precipitation seems to be a more suitable alternative for removing manganese from AMD [11]. Manganese reacts with an oxidant agent, generating a colloidal precipitate, which is then separated by filtration or sedimentation [12]. Several oxidant agents such as air [13], chlorine [14], ozone [15], sulphur dioxide, and oxygen [16] have been evaluated in studies that focused mainly on slightly contaminated effluents. In general, potassium permanganate (KMnO4) stands out, with the advantage of removing other contaminants such as iron and the organic compounds responsible for taste and odour [17]. There is therefore great interest in establishing optimised operating conditions that enable the use of KMnO4 for remediating highly contaminated AMD.
The assessment of reaction rates, as well as of the solid phases formed during remediation procedures, provides valuable information for designing and optimising efficient treatment methods. Van Benschoten et al. [12] published an extensive work on the kinetics of manganese oxidation by KMnO4. According to these authors, oxidation rates depend on the concentrations of oxidant, hydroxyl ions, manganese oxides and free manganese. Other factors such as coprecipitation, adsorption [18], and the autocatalytic effect of precipitates [19] were also reported to affect the removal of manganese and other contaminants, which indicates the importance of investigating the solid phases produced.
Geochemical models are particularly appropriate for assessing solid phases in aqueous systems such as AMD. For instance, the software PHREEQC [20] has proved extremely useful in providing conceptual models focusing on water treatment and environmental remediation technologies [10,[21][22][23]. Based on ionic associations, the program, amongst other options, allows saturation-index calculations and chemical speciation. PHREEQC employs a solubility approach to identify thermodynamically possible solid phases, which may potentially support the conception and management of efficient treatment systems for AMD.
In this study, the oxidative precipitation of manganese by KMnO4 was investigated with the objective of achieving an effective treatment method for highly contaminated AMD. Experiments were divided into two parts: the first was performed with MnSO4 solutions prepared in the laboratory and the second with AMD samples collected at an inactive uranium mine on the Plateau of Poços de Caldas, southeast Brazil. Processes were conducted at pH 3.0, 5.0, and 7.0. Precipitates were characterised by spectroscopy techniques and electron microprobe. In addition, a geochemical model using the PHREEQC software was developed to predict the solid phases possibly formed at each stage of the remediation process.
Methodology
2.1. Liquid Samples. Laboratory solutions were prepared by dissolving 0.31 g of MnSO4·H2O (Vetec, Brazil) in 1.0 L of deionized water. Acid mine drainage samples, identified as A1 and A2, were collected from two different water dams located at a former, now inactive, uranium mine. The effluents were chemically characterised and the results are shown in Table 1.
Bench Experiments.
Experiments were carried out in glass beakers with 200 mL of liquid sample. Initially, under stirring and at room temperature (25 °C ± 0.5), the pH was adjusted by adding sodium hydroxide or sulphuric acid. One aliquot was taken to determine the initial concentration of manganese, which ranged from 90 to 105 mg/L. Afterwards, KMnO4 4% (w/v) was added to the solutions based on predetermined ratios: 1.63 mg KMnO4/mg Mn for experiments at pH 3.0 and 1.54 mg KMnO4/mg Mn for pH 5.0 and 7.0. Samples were collected at pre-established times and filtered through 0.45 μm membranes. Ratio and reaction time values had been determined in a previous study based on a statistical factorial design considering four parameters: concentration of the oxidant agent, reaction time, stoichiometry, and pH. Sulphuric acid solution and sodium bisulphite 5% (w/v) were added to the filtrate as quenching agents. Manganese, aluminium, calcium, iron, and zinc were determined by atomic absorption spectrophotometry (VARIAN, model AA240FS). Uranium was assessed by X-ray fluorescence spectrophotometry (SHIMADZU model EDX-720).
PHREEQC Modelling.
The oxidative precipitation of manganese was modelled using the geochemical program PHREEQC, Version 2 [20]. The consistent thermodynamic database WATEQ4F [24] was used to construct forward models focusing on the assessment of the saturation index (SI). The speciation-solubility model comprised defined amounts of KMnO4 4% (w/v) allowed to react reversibly with aqueous solutions simulating the compositions of the MnSO4 and AMD samples. Based on the conditions of the system, each phase precipitated when SI > 0 or dissolved completely when SI < 0, until equilibrium (SI = 0) was achieved. It is worth noting that the equilibrium condition SI = 0 does not guarantee that the solid physical state will prevail, but since AMD effluents are highly concentrated, precipitation is relatively favourable. The simulation was conducted in three steps: (i) simulation of the initial KMnO4, MnSO4 and AMD solutions; (ii) addition of potassium permanganate; and (iii) equilibration between the solid phases and the resulting solution. In addition to the phases identified by Raman spectroscopy, all thermodynamically possible phases were also considered.
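The precipitate/dissolve rule driving the model can be stated compactly; a sketch follows, with placeholder activity and Ksp values rather than WATEQ4F data.

```python
import math

# SI = log10(IAP / Ksp): a phase precipitates while SI > 0, dissolves
# while SI < 0, and is at equilibrium at SI = 0, as described above.
def saturation_index(iap: float, ksp: float) -> float:
    return math.log10(iap / ksp)

# Placeholder example for fluorite, CaF2: IAP = a(Ca2+) * a(F-)**2.
# The activities and Ksp below are illustrative, not WATEQ4F values.
a_ca, a_f = 10**-3.0, 10**-3.5
si = saturation_index(iap=a_ca * a_f**2, ksp=10**-10.6)
print(f"SI(fluorite) = {si:.2f}")  # > 0 here, so fluorite would precipitate
```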
Results and Discussion
3.1. Oxidation Rates. Data for the MnSO4 solutions prepared in the laboratory at different pH values are presented in Figure 1, where the region of low concentrations is highlighted. Laboratory solutions were first employed to evaluate the oxidation rates of manganese without interference from other contaminants. Equilibrium was reached within approximately 20 min of reaction, when manganese was almost completely removed. The performances of the experiments conducted at pH 5.0 and 7.0 were similar, with manganese contents practically constant and below 0.1 mg/L after 10 min of reaction. At pH 3.0, by contrast, the lowest manganese concentration was approximately 0.3 mg/L, after 60 min of reaction. Nevertheless, all processes achieved final manganese concentrations lower than 1.0 mg/L [4], demonstrating that the KMnO4 doses and pH conditions chosen were appropriate.
As expected, owing to the decreasing availability of reactants, oxidation rates that were initially high in all trials decreased over time. Overall, reactions at pH 5.0 and 7.0 were more efficient than at pH 3.0. These results are consistent with those of Van Benschoten et al. [12], who reported a trend of higher oxidation rates as the pH increases. Those authors, however, found significant differences between reactions at pH 5.5 and 7.0, which were not observed herein. Processes involving higher concentrations are more influenced by the adsorption phenomenon due to the larger amount of precipitates; indeed, those authors investigated initial concentrations approximately 100 times lower than those employed in the present study.
Overall, the results demonstrated that in the absence of other contaminants there is a directly proportional relation between the pH and the efficiency of the process in terms of manganese removal. Under conditions of high redox potential, Mn(IV) is thermodynamically favourable and predominant in aqueous systems containing manganese. As the Eh decreases slightly, other compounds such as Mn2O3 and Mn3O4 can also be formed. However, all these phases may develop even at lower Eh, provided that the pH is raised, as presented in Figure 2. Therefore, the improved efficiency, that is, the greater precipitate development observed at higher pH levels, was expected and is consistent with results found in the literature [6][7][8][12]. Further investigation, however, is needed to identify the chemical mechanisms that explain the dependence of the rate of solid-phase formation on the pH of the medium.
Characterisation of Acid Effluents
Table 1 presents the chemical composition of both mine waters, identified as A1 and A2, as well as the limits for the discharge of effluents set by the Brazilian legislation CONAMA Resolution 430 [4]. Manganese, fluoride, and pH were outside the established limits [4]. In addition, although the Brazilian legislation does not impose any restriction, the levels of calcium, aluminium, and sulphate were particularly high compared to the concentrations usually found in natural waters. The elevated concentrations of these contaminants might result from the typical oxisol present in the mining region, which is extremely weathered and contains high aluminium, calcium, and fluoride levels. These elements are leached from piles of waste rock and tailings, where the generation of AMD takes place. High sulphate is a typical characteristic of AMD, the product of the multistep reaction between pyrite and oxygen in aqueous media.
3.3. Oxidative Precipitation with KMnO4. Figure 3 shows the variation of manganese content in effluents A1 and A2 throughout the oxidative precipitation by KMnO4 at bench scale. Similarly to the results obtained for the MnSO4 solutions, not only were the reaction rates higher but the efficiency also improved when the pH increased, offering a thermodynamically more favourable condition for the development of precipitates, as previously described (Item 3.1). For instance, at pH 7.0, in the first 4 min of reaction, 99.7% and 98.3% of the manganese was removed from effluents A1 and A2, respectively, while at pH 3.0 the removals were 85.4% and 96.1%. Based on the established discharge limit of 1.0 mg/L [4], only the processes at pH 7.0 were efficient.
The results also evidenced the impact of the presence of other contaminants in AMD on the overall efficiency of the process. In contrast with the experiments conducted with MnSO4 solutions (Item 3.1), the process was not as effective when applied to the AMD samples. Unexpectedly, the removals obtained at pH 5.0 were lower than those obtained at pH 3.0 and 7.0, and significant differences were observed between the trials carried out at pH 5.0 and 7.0. The mechanisms by which other elements interfere in the oxidative precipitation by KMnO4 require further investigation. Higher KMnO4 consumption is probably not the major source of the observed differences: most of the elements were already in oxidised form, with the exception of iron, originally found in the divalent form Fe2+ at very low concentration in the AMD samples studied.
On the other hand, adsorption and coprecipitation have been reported in the literature as essential processes in aqueous systems containing manganese [12,25,26]. Therefore, these factors may also be important in the scope of this research. It seems reasonable to suggest that the number of sites available for manganese adsorption decreases in the presence of other elements, since this process is neither preferential nor particularly selective towards manganese ions [25].
Adsorption also explains manganese removal at KMnO4 ratios lower than the stoichiometric one, considering the MnO2 formation summarised in (1), that is, 1.92 mg KMnO4/mg Mn(II) [12,27]. The amount of KMnO4 used in the present study represents only 85% of the stoichiometric ratio. Hence, it is inferred that 15% of the soluble Mn was removed by adsorption on the manganese oxide surface instead of by oxidative precipitation [25]. In fact, the adsorption of unoxidised manganese(II) also depends on the pH condition, as pointed out by Murray et al. [28].
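A quick check of this arithmetic, assuming the MnO2-forming reaction 3 Mn2+ + 2 MnO4- + 2 H2O -> 5 MnO2 + 4 H+ as reaction (1):

```python
# Molar masses (g/mol) for the stoichiometric dose implied by reaction (1).
M_KMNO4 = 158.03
M_MN = 54.94

stoich = (2 * M_KMNO4) / (3 * M_MN)           # mg KMnO4 per mg Mn(II)
print(f"stoichiometric ratio: {stoich:.2f}")  # ~1.92, as cited above

# Doses actually applied in the study are sub-stoichiometric, consistent
# with part of the Mn being removed by adsorption rather than oxidation.
for dose in (1.54, 1.63):
    print(f"dose {dose} mg/mg = {dose / stoich:.0%} of stoichiometry")
```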
Table 2 presents the concentrations of the other contaminants (aluminium, calcium, fluorine, iron, manganese, zinc, and uranium) in effluents A1 and A2 at each step of the treatment procedure, that is, correction of the pH and oxidative precipitation by KMnO4. The oxidation at pH 7.0 had the advantage of being less subject to the action of other components such as iron, zinc, and uranium, which were largely removed during the correction of the pH.
Characterisation of Solid Samples.
As the experiments conducted at pH 7.0 effectively removed manganese from the AMD and MnSO4 solutions, the precipitates produced in these trials were characterised in detail. The solids were found to be poorly crystalline, and Raman spectroscopy was selected to identify the phases. Figure 4 shows Raman spectra for the precipitates (Figures 4(b), 4(c), and 4(d)) and also for a standard synthetic birnessite (Figure 4(a)). Although several points on the surfaces were probed, no distinct absorption outlines were detected, indicating the homogeneity of the samples. By comparing the vibrational modes [29] presented in Figures 4(a) and 4(b), it is reasonable to assume that the process performed with the MnSO4 solution (Figure 4(b)) results mainly in birnessite (MnO2). Lovett [27] reported that the transition of aqueous systems containing manganese to more oxidising conditions, that is, higher pH and Eh, facilitates the formation of oxides. Among several possibilities (e.g., hausmannite, manganite, and pyrolusite), the birnessite and feitknechtite forms are the most likely to be formed. Birnessite was also found during the biological treatment of AMD by Tan et al. [21].
In contrast, significant alterations of intensity and possible overlaps were verified for the precipitates arising from the treatment of the A1 and A2 effluents (Figures 4(c) and 4(d)). In Figure 4(c), the lower intensity at 574.0 cm−1 and higher intensity at 654.2 cm−1 possibly indicate hausmannite (Mn3O4) [29], while the higher intensity around 616.5 cm−1 in Figure 4(d) suggests manganite (MnOOH) [29]. The lower definition and the overlaps (Figures 4(c) and 4(d)) evidence the impact of the presence of other elements on the manganese oxide lattice. Distinct structures were also revealed by scanning electron microscopy analysis. The solids formed from the effluents (Figures 5(b) and 5(c)) are less porous compared with the precipitates from the MnSO4 solution (Figure 5(a)). The presence of other elements filling the structure of the manganese oxides was confirmed by microanalysis with an electron microprobe associated with an EDS detector. The analyses identified other metals, such as aluminium, calcium, zinc, iron, uranium, and rare earths, as well as fluoride and oxygen, in the solids generated during the treatment of effluents A1 and A2.
3.5. Modelling with PHREEQC. The PHREEQC software and the WATEQ4F database were employed to provide an enhanced understanding of the process, given the extensive chemical complexity of AMD effluents. The geochemical model developed included all thermodynamically possible phases and successfully simulated the treatment process. Evaluations focused exclusively on the saturation index (SI), and phases were assumed to be formed when SI ≥ 0 [20].
The addition of KMnO4 was characterised by large removals of calcium and manganese as birnessite, pyrolusite and nsutite (MnO2), hausmannite (Mn3O4), bixbyite (Mn2O3), manganite (MnOOH), and fluorite (CaF2). The final step simulated the prolonged contact between the formed phases and the resultant solution. After equilibrium, manganese was found mainly in oxidation state IV, as birnessite, pyrolusite, and nsutite (MnO2). This result is coherent with those presented by Hem and Lind [30], who studied models to predict the formation of manganese oxides in aqueous systems. According to these authors, during extended contact the degree of oxidation of manganese increases through the disproportionation of phases such as manganite and hausmannite.
Overall, it is worth mentioning that speciation-solubility models do not offer temporal or spatial information; that is, kinetic and distribution parameters were not considered. Therefore, although solid phases were thermodynamically possible, the development of a given phase might not be kinetically favourable. In fact, whether solid phases such as goethite, UO2(OH)2 and manganite were actually formed, or whether most of the metals were incorporated into the manganese oxide phases, is difficult to predict.
Conclusions
Manganese present in highly contaminated AMD was rapidly and effectively oxidised by KMnO4 in a process enhanced at higher pH. At pH 7.0, the process removed 99% of the manganese initially present within 20 min of reaction, meeting the limit of 1.0 mg/L established by Brazilian legislation. The results evidenced that adsorption and coprecipitation are important factors in the removal of manganese and of other contaminants such as calcium and zinc. Consistent with the instrumental analyses, the geochemical model developed with the PHREEQC code successfully predicted the formation of birnessite (MnO2), hausmannite (Mn3O4), and manganite (MnOOH), as well as of phases containing aluminium, calcium, iron, and uranium, in the precipitates. Overall, although the experiments were performed at bench scale, the results support oxidative precipitation by KMnO4 as a potential treatment for heavily contaminated AMD. However, further investigation remains necessary to clarify the chemical mechanisms involved in the control of the oxidation rates of manganese in highly concentrated solutions, as well as the influence of the pH on this process.
Figure 1: Effect of pH on manganese removal from laboratory solutions containing initial Mn(II) concentrations of about 100 mg/L. Temperature, 25 °C ± 0.5.
Table 1: Chemical characterisation of acid mine drainage solutions. Concentrations of all components are in mg/L. *CONAMA Resolution 430 has no limits for these components [4].
Table 2: Removal percentages of chief components from AMD effluents A1 and A2 over pH adjustment and oxidative precipitation with KMnO4. *Removal percentages are relative to the initial concentrations presented in Table 1.
"Geology"
] |
Investigation of Vegetable Market Integration System in Dhaka City: A Study on Effective Supply Value Chain Analysis
Vegetable cultivation in Bangladesh is growing day by day; vegetables are now planted on 2.63 percent of the country's cultivable land area. Vegetables benefit farmers even more than other crops, and they can play a vital role in improving the nutritional status of the chronically malnourished population of Bangladesh. There are various types of agricultural markets in Bangladesh through which agricultural products are traded: rural primary markets, rural assembly markets, rural secondary markets, and urban retail markets. Before reaching customers, vegetables are sold through wholesalers and retailers. There is effectively a complete absence of sophisticated vegetable handling equipment and facilities in the markets; sorting, displaying, and selling are often performed from and into baskets at ground level. Though Bangladeshi fruits and vegetables are exported to about 38 market destinations, the key buyers are primarily located in two regions: the United Kingdom and the Middle East. Bangladesh mostly exports fresh fruits and vegetables, although in recent years the export of processed as well as frozen vegetables has begun on a limited scale. The regular supply chain is for intermediaries to collect orders from exporters, go to production areas, collect crops from farmers or local markets, and arrange to deliver them to exporters on the day of shipment. Owing to the opportunistic actions of sellers and buyers, each seeking to take advantage of the other by means such as adulteration of goods, cheating on weights and measures, and violating distribution contracts, marketing costs rise. In Bangladesh, the vegetable marketing system is challenging, awkward, and unorganized, and it needs to be reformed for the well-being of ordinary citizens.
sanitation (UNEP, 2015). It is currently estimated that some 12 percent of the urban population of Dhaka city live in informal low-income settlements (Hossain, 2011).
Rapid population growth and urbanization are also putting pressure on adjacent agricultural lands, water bodies, forest areas, and wetlands. The risks associated with Dhaka's unplanned urban growth are being further compounded by rapid industrialization and inadequate infrastructure investments, especially in transport (Hossain et al., 2019). Rising air and water pollution caused by traffic congestion and industrial waste are serious problems affecting public health and the city's quality of life. This led the Economist in 2015 to rate Dhaka as the second least livable city in the world. Dhaka is also prone to serial disasters associated with flooding, waterlogging and related problems. Unplanned urbanization and the lack of coordination between government agencies are responsible for urban encroachment into the wetlands and the poor maintenance of the canal system. Agricultural land in Dhaka has a significant impact on flood control: most agricultural land is deluged and can retain water during the monsoon (Rahman, 2016). However, the filling of the wetlands with sand and earth and the conversion of agricultural land to urban uses increase the risk of flooding for the city, which impacts negatively upon both the availability and the affordability of fresh food. Despite massive immigration, it is apparent that the supply of food has increased in parallel with the accelerating rates of urban transformation, largely due to the inherent informality of the food marketing system (Bohle et al., 2010). However, the number of people who remain food insecure is an ongoing concern. More than one quarter (28%) of Dhaka's population live in severe poverty and are undernourished (World Bank, 2007).
According to the Bangladesh Urban Health Survey (2013), as many as 50 percent of the children in urban slums were stunted, in comparison with 33 percent among residents of non-slum urban areas. According to Quasem (2013), low-income households in Dhaka must often buy low quality food. Besides, new problems are emerging in the wake of urbanization which, although not unique to urban life, are especially relevant to it. These include: (a) poor food safety; (b) increasing obesity; and (c) the increasing difficulty of combining the pursuit of work outside the home with caregiving, which is essential for the nutritional well-being of children (WFP, 2016). Chronic food insecurity is thus a reality for a large proportion of the urban population in the city of Dhaka. The resilience of the food system of Dhaka, in terms of food utilization, is difficult to judge, as people's food consumption is driven not only by the availability and accessibility of food, but also by social discourses about food and by people's own needs and desires (Bohle et al., 2010). Most of the transactions in Dhaka's food system are informal, as is the governance of wholesale food markets and street food vendors. Policymakers and urban planners need to take into account that informality is an inherent part of a functional and resilient urban food system (Quddus, 2013).
According to Bohle et al. (2010), the city of Dhaka is well supplied with food, as the availability of staples is well above the national average. Every day, over 9,000 tons of rice, fish, pulses, spices, fruit, vegetables, edible oil, meat, eggs, and wheat are brought into Dhaka and distributed through the marketing system. However, while food might be available, it is unlikely to be affordable to all income groups. Unnayan Onneshan (2004) reported that food prices are increasing due to extortion, the high cost of transport, high lending rates for credit, and the presence of marketing cartels. Regrettably, it seems, extortion and exploitation are spreading among toll collectors and members of the law enforcement agencies.
Saleh (2014) attributes the major causes of food price inflation to man-related, process-related, material-related, transportation-related, and environment-related causes. There is an immediate need to identify entry points for policies that build on rural-urban interdependencies and synergies to foster an enabling environment for smallholder farmers to participate more equitably in food chains, while simultaneously encouraging consumers to make more informed food choices. This study analyses the vegetable value chain to identify impediments and opportunities for improving the performance of the entire value chain in Dhaka city. The study will also support the development of an appropriate food agenda for the city of Dhaka by building urban food planning capacity, and will explore policy options to improve access to and the distribution of safe, healthy, and nutritious food, reduce urban food waste, and encourage consumers at all levels of society to make more informed food choices. From the above viewpoints, the present study attempts to address the following objectives: to review the production level and consumption rate of vegetables in Bangladesh; to review the possible impediments from vegetable production to distribution; and to highlight the different marketing systems for vegetables in Bangladesh.
MATERIALS AND METHODS:
This paper is based on the value chain approach, focusing on intermediary activities, products, and cost flows. The study was conducted in peri-urban areas near Dhaka city (Keranigonj, Gazipur, Savar and Manikgonj) and in Dhaka city itself (Karwan Bazar, Mohammadpur Katcha Bazar, Mirpur-1 Katcha Bazar, Uttara, and Banani residential areas) for the vegetable supply chain. The value chain study follows both quantitative and qualitative methods of data collection. The study is fully participatory, ensuring the maximum involvement of farmers, wholesalers, retailers, street vendors, and restaurants. The research uses a mix of in-depth interviews, focus group discussions, projective techniques, and observation. The steps required to perform the case study are presented below.
Primary investigation - Considering the availability and large production of vegetables, the peri-urban areas are selected as the study area. The vegetable supply chain from these peri-urban areas towards Dhaka is set as the investigation chain, to develop a general understanding of the opportunities bearing on the research objectives.
Literature review - To gain sufficient knowledge of value chain analysis and vegetable supply chain analysis, books, articles, research papers, and project reports are collected and studied.
Preparing the primary questionnaire - After the primary investigation and literature review, a set of primary questionnaires is prepared for farmers, wholesalers, retailers, street vendors, and restaurants. Primary tools and techniques for analysis are also selected.
Field investigation - A field visit is conducted after preparing the primary questionnaire; the questionnaire is based on the value chain analysis technique and the research objectives.
Preparing the final questionnaire - The final questionnaire is prepared after the field investigation. During this phase, the final tools for field investigation and the interviews with key informants and market actors are designed. The final questionnaire, tools, and techniques are then used for the study and analysis.
Data collection - Finally, data are collected through observation and questionnaires, from both primary and secondary sources. Primary data are collected directly from interviews and group discussions with respondents. Secondary data are collected from the Bureau of Statistics (BBS), DAE, newspapers, internet files, etc. In analysing vegetable supply chains in Dhaka, four focus group discussions (FGDs) were conducted with 24 vegetable farmers in the peri-urban districts of Gazipur, Keranigonj, Manikgonj, and Savar (Table 1). Farmers are the producers of vegetables. They grow different types of vegetables and bring their produce to sell in their local market. The amount and types of vegetables differ from season to season.
Wholesalers
The wholesalers (known locally by many names, such as paikers, bepari, or mokami) play a significant role in the vegetable marketing system.
Retailers
Retailers are directly linked to the consumer. They purchase their products from the wholesale market and sell them to the consumer. Face-to-face executive interviews were subsequently conducted with wholesalers (16), retailers (50), street vendors (19), and restaurants (28) in the city of Dhaka. Given the need to gather specific information on prices and volumes, the seasonality of supply, storage, and product wastage, the study focused on three products: (i) spinach (pui-shak), a green leafy vegetable; (ii) potato, a root crop; and (iii) eggplant (brinjal), a fruit crop.
Data processing and analysis - The collected data are sorted and arranged so that further study and analysis can be performed. Quantitative data are arranged using graphs and tables, and various types of information are given as profiles. Data are analyzed using pie charts, cause-effect analysis, flow charts, process trees, etc. The data obtained from interviews, questionnaires, and observations are structured in order. After completion of the data processing, the analysis is performed.
RESULTS AND DISCUSSION:
Present Supply Value Chain Map - The value chain map shows the movement of vegetable products along the supply chain and identifies the actors and their activities. The present supply chain, illustrated in Fig 1, identifies the major channels of the vegetable supply chain in Bangladesh. The value chain starts with producers and ends with consumers. From producer to consumer, the product follows a lengthy market channel. Different market actors, known as market intermediaries or stakeholders, are involved in this value chain; among them are the local wholesaler, the divisional wholesaler, the regional wholesaler, and the retailer.
The two main problems that farmers faced in growing fresh vegetables in the peri-urban areas were the non-availability of labour (75%) and the fluctuation in output prices (sudden price drops) (75%) (Table 2). In the focus group discussions (FGDs), most vegetable growers reported a scarcity of labour for the cultivation, harvesting, and post-harvest processing (sorting, grading, and packing) of their vegetable crops. As wages for agricultural labourers were very low, labour had been diverted to other industrial sectors. According to Islam and Uchiyama (2012), vegetable production requires at least 50 percent more labour than cereal production.
Besides, agricultural work is not constant all year round, as it depends on the seasonality of vegetable cultivation. For the few available workers, the cost of labour was rising. Islam and Uchiyama (2012) similarly reported that the high cost of labour was an issue in vegetable cultivation, for labour was often the single highest cost item (33.6% of total costs). Table 2: Constraints faced by the farmers in commercial cultivation of vegetables in peri-urban areas.
A sudden drop in vegetable prices is another constraint, as vegetable prices fluctuate very frequently in the market. Farmers are often uncertain about when to harvest their products for sale. The production cost of vegetables was about three times higher than that of cereals, but when farmers went to sell their products, the market price had often dropped dramatically, owing to the lack of proper price monitoring by the government. Farmers cannot anticipate future market prices at the time of planting and have only a limited ability to postpone sales when prices are unfavourable, as most perishable vegetables cannot be stored for more than a few days. They did not get the right price for their products and incurred huge losses, as they were bound to sell all of their perishable vegetables instantly; otherwise the produce would be damaged or spoiled. Moreover, there was no adequate cold storage for keeping these perishable vegetables for a longer period. As per the findings of Islam et al. (2012), there is wide seasonal fluctuation in the price of vegetables, and price variation is one of the main risks vegetable growers face.
The second most important constraint faced by the farmers was the unavailability of agricultural loans from government or commercial banks during the cultivation period, owing to a lack of knowledge of the loan sanction process (50%). Most farmers in Bangladesh are uneducated and, not properly understanding the entire loan sanction procedure of governmental or commercial banks, had to arrange loans through a few middlemen in the bank office or other outside persons. To get the loan on time, they had to pay a certain amount of bribe or commission to these middlemen; otherwise they were deprived of the loan. Other problems faced by the growers were the low price of products throughout the year (50%) and the unavailability of agricultural inputs such as fertilizer, manure, insecticide, pesticide, and irrigation (50%).
[Table 2 rows interleaved here: Need to pay local leaders to sell products at the roadside - 25; Attack of insects - 25; No facilities for storage of vegetables - (value not recoverable).]
The BADC (Bangladesh Agriculture Development Corporation) provides necessary agricultural inputs to farmers, and their quality is better than that from other sources, but the quantity of these inputs was too small to distribute to all farmers. In this circumstance, farmers had to obtain their inputs from other sources, such as private companies, where prices were higher than BADC's. However, 50% of farmers were not happy with the quality of the agricultural inputs bought from private companies, as in most cases the seeds were mixed with inert material or extraneous matter and germinated poorly. Moreover, 50% of farmers were affected by flash floods during the monsoon, when low-lying cultivable vegetable land in Bangladesh was inundated with floodwater. These farmers could not cultivate their land and incurred huge losses.
The data also support another qualitative study, conducted in Kalatia village at Keranigonj by Islam et al. (2012), which found that the main problems for vegetable production in Kalatia are related to flooding, as floodwaters could damage or destroy the entire crop. Another issue for the farmers is the high interest rate on loans (50%), which was a heavy burden to service at the end of the year. Other constraints faced by the growers were sudden rain in the monsoon (25%), the high price of agricultural inputs (25%), not getting the expected vegetable price from customers (25%), the absence of a separate vegetable market (25%), and the infestation of insects and pests (25%) (Temesgen, 2020).
Wholesalers, Street Vendors, and Restaurants -
All wholesalers, street vendors, and restaurants purchased fresh vegetables every day, whereas some 2% of retailers purchased only 2-3 times a week and 2% only 2-3 times per month (Table 3). Weekly purchase and sale quantities for each actor are summarised in Table 4.
Wholesalers' average weekly purchases (21,546 kg) and sales (24,445 kg) of aggregated vegetables were higher than those of the other actors; the quantity sold appears higher than the quantity purchased because respondents declared their sales inclusive of the previous day's opening balance. There was a 2% loss of total vegetables per week in the retailers' hands, and a 6% loss in the case of street vendors. In most cases street vendors have no adequate shelter to protect their perishable vegetables from drying out while carrying their products on the road or street, and no storage facilities for keeping unsold vegetables for the next day; normally they discarded these unsold vegetables, or used them for their own purposes, at the end of the day. The restaurants, on the other hand, used all of their purchased vegetables on the same day, and hence incurred no losses.
In the case of potato, no losses were found for any of the actors. A similar situation to that of the aggregated vegetables was observed for wholesalers and retailers, with the quantity sold per week exceeding the quantity purchased because of the previous day's opening balance. Street vendors and restaurants, for their part, sold or used all of their products on the same day, and hence incurred no losses.
For brinjal, by contrast, the weekly losses were higher than for potato: the losses of the wholesaler, retailer, and street vendor were 4%, 5%, and 1%, respectively. The retailers' losses were higher than the wholesalers' because wholesalers sold all of their brinjals on the same day and kept no leftovers for the next day, whereas retailers did not sell all of their brinjals the same day and kept the unsold brinjals in storage, where they were piled up haphazardly. A similar situation was observed for spinach, where retailers incurred 3% losses and there were no losses for street vendors or restaurants. The average purchase price of aggregated vegetables by wholesalers was 26 tk/kg and the selling price 28 tk/kg, a sales margin of 8%. The sales margin increased dramatically (to 33%) when aggregated vegetables passed from the wholesaler to the retailer, and the street vendors' gross profit or sales margin was almost the same as the retailers' (33%).
Restaurants purchased aggregated vegetables at 70 tk/kg, but as they used all of them for their own business consumption, there were no sales data and the gross profit margin at their end could not be shown. In the case of potato, the wholesaler's sales margin was 18%, but when the products passed to the retailer and the street vendor, the sales margin rose to 25% in both cases. As restaurants have no sales data, their margin likewise could not be calculated.
For brinjal, the highest sales margin (34%) was achieved by street vendors, while the sales margins of wholesalers and retailers were 10% and 23%, respectively. The overall sales margin increased step by step as the product moved from wholesaler to street vendor, although street vendors and retailers may have purchased brinjal from the same or different wholesalers. For spinach, the retailer's sales margin (30%) was higher than the wholesaler's (20%) and the street vendor's (27%); as there were no data on restaurants' spinach sales, the margin at their end could not be determined.
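A sketch of the margin arithmetic (assuming, as the wholesaler example above implies, that the sales margin is computed as markup on the purchase price):

```python
# Sales margin as markup on the purchase price: (sell - buy) / buy.
# Prices are the aggregated-vegetable figures quoted in the text; the
# reported 8% wholesaler margin is reproduced, supporting this reading.
def sales_margin(buy_tk_per_kg: float, sell_tk_per_kg: float) -> float:
    return (sell_tk_per_kg - buy_tk_per_kg) / buy_tk_per_kg

print(f"wholesaler: {sales_margin(26, 28):.0%}")  # ~8%, as reported above
```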
The net profit of wholesalers across all types of vegetables was 2.75%, while the net profits of retailers and street vendors were 14.87% and 11.90%, respectively, roughly four to five times that of the wholesalers (Table 6). Most vegetables (80%) were delivered to wholesalers in Dhaka city by traders or farmers from rural or peri-urban areas of Bangladesh, over an average delivery distance of 63 km; the standard deviation exceeds the average because some vegetables were delivered or collected from much shorter distances and some from much longer ones. All of the retailers (100%) collected their vegetables from wholesalers in Dhaka city, over an average distance of 4 km. Similarly, street vendors collected their vegetables either from the same wholesalers as the retailers or from different ones. Restaurants collected all of their daily vegetables from a retailer or wholesaler near the restaurant (Table 8).
The findings show that wholesalers normally used trucks (62%) to bring their vegetables into Dhaka city over long distances, followed by pickups (38%); having brought the product from afar, they had no occasion to use non-motorized vehicles at all. Retailers, by contrast, preferred pickups (33%), rickshaws (23%), and vans (22%) for longer distances, and head loads (18%) for short ones. Street vendors normally sell fresh vegetables from their vans in Dhaka city and therefore need little other support for carrying produce; most of the time they used their own van (74%), with rickshaws (11%) and head loads (10%) as alternatives over short distances. Restaurant respondents depended entirely on vans (100%), since they normally carry all types of commodities at once and a van suits this purpose better than other transport. All respondents among wholesalers, retailers, street vendors, and restaurants stated that the quantity supplied to the market is not consistent (100%) throughout the year. All said that vegetables were most plentiful in winter and scarcest in autumn; as it happens, the high-demand season for vegetables was also winter (Table 10).
Most wholesaler respondents said they selected vegetable suppliers on the basis of low price and cost (40%) and vegetable quality (40%), followed by variety of products (20%) and size (20%). The majority of retailers stated that they looked first for a low price (90%) when dealing with suppliers, followed by quality (44%), variety (33%), and freshness (13%). Likewise, 90% of street vendors prioritized low cost and price, along with quality (74%). Among restaurants, 50% of respondents preferred to deal with vegetable wholesalers or retailers offering low cost, and 50% prioritized quality (Table 11).
Most actors in the vegetable trade transacted in cash, whether buying or selling. Street vendors and restaurants dealt only in cash, since the quantities they bought and sold were small compared with those of wholesalers and retailers. Wholesalers and retailers, by contrast, had to buy or sell much larger consignments, so 6% of wholesalers and 10% of retailers needed to transact on credit (Table 12). These findings are supported by Ahmed (2014), who found that vegetable retailers have to buy their product in cash and seldom get any credit from the wholesaler.
In transporting fresh produce, most wholesalers experienced problems with traffic jams (73%) on the road, followed by long delivery times (27%); they also faced traffic accidents (9%) and the non-availability of transport (9%) (Table 13). This finding supports the study of Hassan (2013), which identified the main transportation problems as the lack of farm roads, broken and uneven roads and highways, poor coordination among transport agencies, high damage during transportation, and slow movement due to traffic congestion. Many retailers (23%) experienced problems with police bribery, followed by traffic jams (20%), road accidents and the high cost of transport (13%), and long delivery times (8%). Ahmed (2014) revealed that retailers and street vendors struggled to carry their products from the wholesale market to the retail market because they had no permanent transport service. Street vendors often experienced traffic jams (18%) and road accidents (18%), as most of them transported the vegetables by cart or rickshaw. They also faced daily restrictions on vehicle movement on some roads in Dhaka. In some residential areas, the respective housing society allowed nominated street vendors to conduct business, but vendors had to abide by the society's rules and display a notice of approval when entering the restricted area. Some 6% of respondents each cited harassment under various government rules and regulations, vegetables drying out in open transport, managing a shared vehicle with other vendors, the distance to the wholesale market, and the lack of transport other than rickshaws for carrying small quantities of vegetables.
Restaurant owners stated that they faced high transport costs (38%) in carrying products from the market to their place of business, followed by acute traffic jams (31%), the lack of transport (23%), and long delivery times (15%). Some 56% of wholesalers had no storage capacity, since they usually sold their vegetables to other actors on the same day, while 44% said they had storage facilities. Because all retailers need to hold product for sale the next day, all of them had adequate storage. Among street vendors, 60% said they had storage for leftover vegetables, whereas all restaurants reported having no storage, as they used all of their vegetables on the same day (Table 14). Post-harvest losses for vegetables in Bangladesh are estimated at 20 to 25%; for highly perishable fruit and vegetables, losses may reach 40% (Badruddoza and Rolle, 2006). The lack of appropriate storage facilities is seen as a factor that contributes to these losses (Hassan et al., 2010). Storage facilities should be located at each loading and unloading point, as well as in the wholesale markets; their absence was identified as a critical problem in the present marketing system. For wholesalers, the main problem in transactions with suppliers was damage due to overloading (50%), followed by inadequate knowledge of loading and unloading (25%) and insufficient storage facilities (25%); other causes of product damage included high temperature (12%) and the lack of labour (12%) (Table 15). The findings show that all retailers (100%) stored product before selling, compared with 44% of wholesalers and 60% of street vendors; restaurants stored nothing, purchasing only what they needed each day. When storage was necessary, wholesalers generally held vegetables for no more than one day (an average of 124 kg), retailers for 1 to 1.5 days (an average of 75 kg), and street vendors for 1 day (an average of 12 kg).
Although the storage duration for retailers and street vendors was similar (about one day), the lack of appropriate storage facilities meant that post-harvest losses for street vendors (15%) were much higher than for retailers (5.33%) and wholesalers (4.83%). Whereas no wholesalers expressed any concerns about food safety, 30% of retailers, 50% of street vendors, and 20% of restaurants did (Table 16). Overloading of vegetables (41%) was considered a critical issue by retailers, followed by poor packaging (31%); improper loading and unloading (28%) also contributed to damage and the resulting loss of quality, and 9% of retailers attributed poor quality to poor grading at the farm gate. Street vendors (60%) and restaurants (57%) were seriously affected by the low quality of the products they bought from wholesalers and traders. Street vendors suffered product damage from improper loading and unloading (40%), followed by overloading (20%) and poor packing (20%). For restaurants, poor product quality was attributed to improper loading and unloading (21%), damage caused by the poor transportation system (21%), and rotten produce. Singh and Chandha (1990) reported that 25-40% of vegetable losses occur due to rough pre-packaging and improper post-harvest handling, transportation, and storage practices, and Sharma (1987) showed that post-harvest losses of vegetables in Bangladesh could be as high as 43%. Retailers (30%), street vendors (25%), and restaurants (20%) all expressed concern about the potential use of formalin. Chemical contamination was another food safety concern among retailers (20%) and street vendors (25%). Restaurants were concerned about the low quality of vegetables (40%) and the chemicals used (40%). Because street vendors often sprayed their produce with water to keep it fresh, concerns were raised about the use of non-potable water and the risk of contamination. No respondent in the vegetable supply chain had received any formal training in food safety management (Table 18).
None of the respondents among wholesalers, retailers, street hawkers, and restaurants had received any formal training on food safety or related topics (Table 19). In conducting their business, wholesalers generally employed 3 full-time staff and as many as 5 part-time (or temporary) employees; the retail sector generally employed only 1 full-time and 1 part-time staff member, while restaurants employed as many as two full-time staff and up to 10 temporary employees (Table 20). Most respondents (49%) involved in vegetable trading were aware of the weights and measures act under BSTI, which makes each business person responsible for the correct calibration of their measuring balance.
The need for a trade license (26%) was the second most frequently cited regulation, followed by the market price monitoring or price control law (19%). While some respondents were aware of the legislation banning the use of polythene bags (7%), the bags continue to be used; waste cleaning regulations in the marketplace were cited by another 7% of respondents. A considerable share of respondents (16%), however, were not aware of any laws related to the vegetable business (Table 21).
Moreover, a modest but encouraging awareness of food safety rules and regulations was observed among vegetable traders, including the food safety law (2%), the formalin control act (2%), food contamination rules (2%), and controls on the use of chemicals and pesticides in vegetables (2%) (Table 22).
CONCLUSION:
Bangladesh is an agricultural country with high potential to become a major producer and exporter of fresh vegetables; the country, however, has so far failed to realize this potential. There is huge worldwide demand for vegetables, and their consumption has increased due to proven health benefits. The vegetable traders' views discussed above point to insufficient infrastructure, a lack of training among growers and traders, and an informal and unregulated market as the reasons behind this failure. Policymakers should pay attention to expanding the country's vegetable production for both local consumption and the export market. Several African studies have shown that small-scale vegetable farms can boost the financial condition of poor urban and rural households. The vegetable supply chain in the country should therefore be reorganized to meet global standards of health and hygiene, and to serve poverty reduction and export earnings.
Based on the findings of the study, the following conclusions are drawn. Vegetable production is on an increasing trend, but consumption remains unsatisfactory in comparison with other countries, so daunting tasks lie ahead to achieve the targets. The marketing system in Bangladesh is problematic and unorganized. Adequate packing facilities in production areas or near the airport, and cold storage facilities at the airport, will be necessary before the export of fresh perishable produce can be expanded.
"Agricultural and Food Sciences",
"Economics",
"Business"
] |
Energy for Sustainable Development: The Energy–Poverty–Climate Nexus
Worldwide, 1.4 billion people lack access to electricity, and 2.7 billion people rely on traditional biomass for cooking. Most people living in energy poverty—without electricity access and/or using traditional biomass for cooking—are from rural areas of Sub-Saharan Africa, India, and other developing Asian countries (excluding China). At the same time, the poorest people are the most likely to suffer from the impacts of climate change. Fortunately, innovative, sustainable energy technologies can allow developing countries to leapfrog to low-carbon renewable energy, while at the same time alleviating extreme poverty. Increasing energy access, alleviating rural poverty, and reducing greenhouse gas emissions can thus be complementary, their overlap defining an energy–poverty–climate nexus. Transitioning to more efficient low-carbon energy systems in rural areas can generate greater returns than similar efforts in industrialized areas. Accordingly, this chapter provides an overview of: a) The linked problems facing developing countries of energy access, poverty, and climate change, and how these problems interact and compound each other. b) Potential renewable energy solutions, including off-grid solar, wind, clean biomass, micro-hydro, and hybrid systems. For each energy option, benefits and challenges will be discussed, along with examples of successful small-scale use in rural areas of developing countries.
Introduction
Most of the world's people without access to electricity or clean energy are from rural areas of developing countries, the majority in Sub-Saharan Africa, India, and other developing Asian countries (excluding China) [2]. At current growth rates, about half a billion "energy poor" will be added over the next 20 years [1,2]. Improving access to cost-effective, sustainable energy technologies is critical for addressing poverty in developing countries [1]. Although improving energy access is not one of the 8 globally agreed Millennium Development Goals (MDGs), it is a cross-cutting issue that directly impacts achievement of the goals [1]. More and better energy services are needed to end poverty, hunger, educational disparity between boys and girls, the marginalization of women, major disease and health service deficits, and environmental degradation [1,2]. Without modern energy services, basic social goods such as health care and education are more costly in both real and human terms, and economic development is harder to sustain [1]. A clear correlation exists between energy and the Human Development Index (HDI) [3]. As the International Energy Agency and United Nations state, "Access to modern forms of energy is essential for the provision of clean water, sanitation and healthcare and provides great benefits to development through the provision of reliable and efficient lighting, heating, cooking, mechanical power, transport and telecommunication services" [2].
The 1.4 billion people who lack access to electricity either live in locations too remote to be connected or cannot afford the connection fee [4]. In off-grid locations, fossil fuels are often unaffordable due to the cost of delivery to remote areas [5].
The Energy-Climate Nexus
The climate change problem is largely a fossil fuel problem. According to the Intergovernmental Panel on Climate Change (IPCC) 2007 report, at least 57% of greenhouse gas emissions globally stem from the burning of fossil fuels [6]. For carbon dioxide (CO2), the most important anthropogenic greenhouse gas, 74% of emissions are due to combustion of fossil fuels [6]. When fossil fuels are burned for energy, the carbon stored in them, originally from biomass such as algae, is emitted almost entirely as CO2. These fossil fuels include coal, oil, and natural gas, which are burned in electric power plants, automobiles, industrial facilities, and other sources.
The Poverty-Climate Nexus
The poorest people in the world are the most likely to suffer from impacts of climate change.
According to the United Nations Development Programme (UNDP), a person in a developing country is 79 times more likely to suffer from a climate-related disaster than a person in a developed country [2]. The poor are especially vulnerable to the impacts of climate change, including reduced agricultural productivity and increased food insecurity; heightened water stress and insecurity; rising sea levels and increased exposure to climate disasters; loss of ecosystems and biodiversity; and amplified health risks [1]. According to the Human Development Report (HDR) 2007-2008, failure to address climate change will consign and trap the poorest 40% of the world's population, some 2.6 billion people, in downward spirals of deprivation [7]. Providing energy access will help poor areas adapt in the face of a changing climate [8].
Reductions in greenhouse gas emissions must include both developed and developing countries, including those with significant numbers of people living in poverty. According to the U.S. Environmental Protection Agency (EPA), greenhouse gas emissions from developing countries are expected to exceed those from developed countries in 2015 [9]. According to Casillas and Kammen [8], every dollar spent on the transition to more efficient low-carbon energy systems in rural areas has the potential to produce greater carbon mitigation returns than in more industrialized areas.
Potential Solutions
Since the problems of energy access, poverty, and climate change are interrelated in developing countries, solutions can be designed to solve all 3 problems simultaneously. Innovative sustainable energy technologies can allow developing countries to leapfrog to low-carbon renewable energy, while at the same time alleviating extreme poverty. Increasing energy access, alleviating poverty, and reducing greenhouse gas emissions can thus be complementary, their overlap defining an energy-poverty-climate nexus solution.
As mentioned above, according to the article published recently in Science by Casillas and Kammen [8], transitioning to more efficient low-carbon energy systems in rural areas can generate greater returns than similar efforts in industrialized areas. Cities in developing countries may have access to electricity for lighting; rural areas typically lack access altogether, so the need there is greatest [9]. Access to electricity in rural areas, even at modest consumption levels, can dramatically improve a community's quality of life. For example, electric lamps can allow children to study at night, and radios and cellular phones can greatly improve communication pathways [10]. This section will accordingly focus on renewable energy solutions for rural areas of developing countries.
Centralized electrification requires massive amounts of capital [10]. The dispersed nature of houses and low potential demand create little incentive for power companies to provide access to rural areas. In addition, extending the grid may be unrealistic due to transmission line costs or hard terrain [5]. Thus, in rural areas, off-grid and mini-grid solutions make the most sense. Such systems can consist of a single home or several small homes and businesses. The systems can be incremental and scalable and applied to many different conditions and environments [10]. Off-grid and mini-grid options for renewable electricity include solar, wind, clean biomass, and micro-hydro. These options for renewable power will be discussed in more detail below.
Solar Power
Solar energy is abundant in many locations in the developing world [5]. Many regard it as the most promising renewable source for developing countries [5,11,12]. The use of solar energy produces no on-site air pollutants (although pollutants are typically generated in the manufacture of photovoltaic (PV) cells).
Solar energy can be utilized in two ways: as direct heat for various purposes (heating water, heating space), and as direct-current electricity generated by a PV system. The electrical energy can be used immediately to pump water for irrigation, or for refrigeration, lighting, or other purposes; alternatively, it can be stored in a rechargeable battery for later use [5]. This can help solve the problems associated with solar intermittency [12].
Individual houses can have their own PV system for lighting and small appliances, such as radio and mobile phone charging. A village can benefit from a larger PV system, with a microgrid structure.
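To make the scale of such systems concrete, here is a back-of-the-envelope sizing sketch (the sun-hour, efficiency, and appliance figures below are illustrative assumptions, not data from this chapter):

```python
# Rough daily energy yield of a solar home system (illustrative assumptions).
panel_watts_peak = 40        # Wp, upper end of the 10-40 Wp range cited below
peak_sun_hours = 5.0         # assumed equivalent full-sun hours per day
system_efficiency = 0.75     # assumed losses: battery charging, wiring, dust

daily_wh = panel_watts_peak * peak_sun_hours * system_efficiency
print(f"~{daily_wh:.0f} Wh/day")  # ~150 Wh/day

# Enough, for example, for a few LED lamps plus phone charging:
led_lamp_w, hours_of_light, n_lamps = 3, 4, 4
phone_charge_wh = 10
remaining = daily_wh - n_lamps * led_lamp_w * hours_of_light - phone_charge_wh
print(f"{remaining:.0f} Wh/day to spare")  # ~92 Wh/day
```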
Solar Challenges
The primary barrier to widespread implementation of the solar PV technology is its cost, due to the high cost of the silicon base material and associated manufacturing processes [1]. In addition, production of solar cells currently requires sophisticated and expensive manufacturing facilities and highly trained personnel, which may not be available in developing countries. Nicole Kuepper, a Ph.D. student, won the Eureka Prize for Young Leaders in Environmental Issues and Climate Change for developing a simple, inexpensive way of producing solar cells in a pizza oven. The process uses a low-cost inkjet printing process, aluminum spray, and a low-temperature pizza oven, meaning that the solar cells can be made without high-tech environments or high-cost inputs [1].
Solar water heaters (SWHs) are relatively expensive to install ($500-$2,100), although the initial investment can be recovered through future electricity savings [1]. For many families in rural areas, purchasing a solar lighting set, even for lighting service alone, is so difficult that a systematic approach has to be designed that lets them pay only for the lighting service received, instead of owning the hardware outright [5].
Solar Success Stories
Solar lanterns. Typically, solar-powered lanterns use solar energy to charge a battery that powers a solid-state light-emitting diode (LED), the most efficient lighting technology on the market. One solar lantern, the Mighty Light, costs around US$45 and lasts up to 30 years. It has replaced polluting, dangerous kerosene lamps for thousands of households in Afghanistan, Guatemala, India, Pakistan, and Rwanda [1]. As of November 2010, around 9,000 units of a solar lantern called the "solar tuki" had been installed in Nepal [1].
Solar home systems. A solar home system is a PV system with capacity of 10-40 Wp (peak Watts). By 2007, Grameen Shakti had installed 100,000 solar home systems to power lights, motors, pumps, televisions, mobile phones, and computers in Bangladesh [1].
SELCO of India is an organization successfully installing solar home and business systems to provide electricity for lighting, SWHs, solar inverter systems (for use in communications and computing), and small business appliances. Since 1995, SELCO has been providing solar energy solutions to underserved households and businesses in India, based on the ideas that poor people can afford sustainable technologies, poor people can maintain sustainable technologies, and social ventures can be run as commercial entities [1].
Community solar systems. Community solar PV systems are commonly used for pumping water for drinking and irrigation. The solar panel may vary between 130 Wp and 40 kWp. The Promethean Power project promotes a version of community solar systems using concentrated solar thermal power rather than PV. The system, which can be manufactured locally, concentrates solar thermal energy to heat a fluid refrigerant. The solar thermal system is combined with a unique microscale generator adapted and scaled to suit the needs of underserved communities. The heated fluid expands through a rotary vane turbine (an automobile power-steering pump) to make mechanical energy that spins a generator (an automobile alternator). The Massachusetts Institute of Technology (MIT) is installing the systems in Lesotho, in Africa [1].
Wind Power
Like solar, the fuel source for wind power is free and unlimited. The use of wind turbines to generate electricity via a generator produces no on-site air pollution, although a small amount of emissions is produced during manufacture of the turbines. One life-cycle assessment found that off-grid wind turbines reduce greenhouse gas emissions by 93%, compared to off-grid diesel power generation systems [13]. An additional advantage is that wind turbines are simple mechanical systems that can be easily maintained and repaired [10].
Wind Challenges
One of the challenges associated with wind power is its intermittency. Researchers are developing solutions to this problem for off-grid wind systems. Short term, the electrical energy generated by the turbine can be stored in a battery [14]. Researchers have developed controllers that maximize capture of wind energy and avoid battery overcharge [15,16]. Other researchers have proposed a hybrid energy storage system that can provide uninterrupted power, according to simulations. In the hybrid storage system, a battery is used for short-term energy storage, and a water electrolysis hydrogen system is used for long-term energy storage due to hydrogen's high mass energy density and very low leakage [17].
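As a toy illustration of the battery-plus-hydrogen idea described above, a naive dispatch rule might look like the following (all thresholds, capacities, and efficiencies here are invented for illustration; the actual controllers in [15-17] are far more sophisticated):

```python
def dispatch(wind_wh, load_wh, battery_wh, h2_wh,
             batt_cap_wh=1_000, eta_h2=0.4):
    """One hour of a toy battery-first, hydrogen-second dispatch rule.

    Surplus wind charges the battery first; any remainder is stored
    as hydrogen, with electrolysis/storage losses lumped into eta_h2.
    Deficits drain the battery first, then the hydrogen store.
    Returns updated (battery_wh, h2_wh, unmet_wh).
    """
    surplus = wind_wh - load_wh
    unmet = 0.0
    if surplus >= 0:
        to_batt = min(surplus, batt_cap_wh - battery_wh)
        battery_wh += to_batt
        h2_wh += (surplus - to_batt) * eta_h2   # long-term store
    else:
        need = -surplus
        from_batt = min(need, battery_wh)
        battery_wh -= from_batt
        from_h2 = min(need - from_batt, h2_wh)
        h2_wh -= from_h2
        unmet = need - from_batt - from_h2
    return battery_wh, h2_wh, unmet

# Example: a windy hour (800 Wh in, 300 Wh load) then a calm hour.
state = dispatch(800, 300, battery_wh=600, h2_wh=0)
print(state)                       # battery tops up, surplus becomes H2
print(dispatch(0, 300, *state[:2]))  # calm hour served from the battery
```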
Another challenge is selecting an appropriate location for the turbine, due to the highly localized nature of wind. Low-cost anemometers may help alleviate this problem, but time must be spent to collect a sufficient amount of data [10]. Areas particularly suited to wind power because of their typical high wind velocities include coastlines, high ground, and mountain passes [12]. Wind power does not need water, so it is suitable for dry areas.
A third challenge is the need for a tower, so that the turbine is at least 10 m above the nearest obstacle. The tower itself could cost more than the wind turbine. Researchers are examining towers made from bamboo and other common and low-cost materials [10].
Wind Success Stories
The cost of commercially available wind turbines is several thousand US dollars per kilowatt, which is out of reach for most rural residents of developing countries [10]. Low-cost wind turbines with timber blades have been demonstrated successfully in Nepal [18]. Low-technology wind turbine generators, which can be made by people with limited technical skills, no advanced machining equipment, low capital cost, and limited exotic materials, have also been demonstrated [10]. The low-technology wind turbine was constructed in a joint effort by the IEEE Power & Energy Society Community Solutions Initiative and the Puget Sound Professional Chapter of Engineers Without Borders, USA.
Clean Biomass
Biomass sources in rural areas include human excrement, animal manure, and agricultural wastes. Biomass can be burned directly to produce heat energy or electricity via a microturbine, or it can be degraded by anaerobic microbes to produce biogas. Although burning biomass in inefficient cookstoves contributes to illness via indoor air pollution, as described above, biomass can be burned using clean technologies or used to generate clean-burning biogas. Biogas, typically 60-70% methane, can be used directly in natural gas-powered appliances or burned to generate electricity via a microturbine and generator [19]. It is expected that microturbines powered by biogas might eventually be competitive with diesel engines for village-scale power applications, with relatively low maintenance costs, high reliability, long lifetimes, and low capital costs [1]. Fuel cells might ultimately prove able to generate power at village scales from biogas, at very high efficiencies [1].
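For a rough feel for the energy involved, consider the following estimate (the methane fraction comes from the text; the digester output, heating value, and conversion efficiency are illustrative assumptions):

```python
# Illustrative: electricity from a small village biogas digester.
biogas_m3_per_day = 10        # assumed digester output
ch4_fraction = 0.6            # 60-70% methane, per the text (low end)
ch4_lhv_kwh_per_m3 = 10.0     # ~lower heating value of methane
microturbine_eff = 0.25       # assumed electrical efficiency

kwh_per_day = (biogas_m3_per_day * ch4_fraction
               * ch4_lhv_kwh_per_m3 * microturbine_eff)
print(f"~{kwh_per_day:.0f} kWh/day of electricity")  # ~15 kWh/day
```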
Anaerobic processes that produce methane from waste solve 2 problems at once: waste and energy. Anaerobic processes provide some of the simplest and most practical methods for minimizing public health hazards from human and animal wastes. Pathogens such as schistosome eggs, hookworm, flat/tape worm, dysentery Bacillus, poliovirus, Salmonella, and Bacillus for paratyphoid are destroyed. A residence time of 14 days at >35°C in a small-scale system in a developing country can provide 99+% removal of pathogens, with the exception of roundworm [20,21].
In addition, the solid residue from anaerobic waste treatment processes is a valuable fertilizer, which is stabilized and almost odorless. This fertilizer is especially a benefit in developing countries, due to its potential to boost crop yields.
Biomass Challenges
Ideally, energy should be produced from biomass that is not edible and that cannot be grown in places where edible crops could be grown, so that competition between uses of crops for energy and food does not become an issue. Producing energy from wastes avoids this issue.
Because of limits on the amount of land available for growing plants that can be used for energy, bioenergy cannot be viewed globally as the sole replacement or substitute for fossil fuels, but rather as one element in a broader portfolio of renewable energy sources [1]. In rural locations in developing countries without current access to electricity, however, biomass can provide a transformative local power source.
Biomass Success Stories
An improved biomass cookstove designed by Prakti Design Lab to meet the cooking requirements of rural households is around 40% more fuel-efficient than traditional cookstoves and emits 70-80% less smoke [1].
In Senegal, proliferating invasive aquatic plants are being transformed into combustible pellets that can be used for cooking, replacing wood and charcoal. By degrading lake water quality, the plants' proliferation caused an increase in waterborne diseases. The plants also created problems for fishermen, by jamming their nets, and for farmers, by reducing access to water for livestock. Local fishermen and farmers will be recruited as plant removers, and 20 additional local workers will be hired and trained to manage the pellet production process. Based on the production capacity of the compaction machine (4,000 kg/week) and a local price of US$0.28/kg, the pellets could generate income of about US$1,120/week for the local population [1].
Micro-Hydro
Micro-hydro systems use the natural flow of water to yield up to 100 kW of electrical output [22]. Simplicity, efficiency, longevity, reliability, and low maintenance costs make these systems attractive for rural development [23]. Like solar and wind, the fuel source for micro-hydro power is free, and the use of hydro-powered turbines to generate electricity produces no on-site air pollution.
Unlike large hydroelectric plants, micro-hydro systems do not require a dam and reservoir, which minimizes their environmental impact. A portion of the river's flow is diverted to the micro-hydro intake. A settling tank may be used to allow silt to settle out of the water, and a screen or set of bars keeps out floating debris and fish. The water then flows through a channel, pipeline, or pressurized pipe (penstock) to the powerhouse, which houses a turbine or waterwheel. The turbine turns a generator to produce electricity [22]. A variety of turbines may be used, including a Pelton wheel for high-head, low-flow water supplies, or a propeller-type turbine for low-head installations [24,25].
Micro-Hydro Challenges
Micro-hydro systems are obviously limited to locations with a stream or river. The flow volume must be sufficient to supply local energy needs. In addition, a sufficient quantity of falling water must be available, which usually means that hilly or mountainous sites are best. A drop in water elevation (head) of at least 2 ft is required; otherwise the water does not fall far enough to produce sufficient head, and the system may not be feasible [22,24]. Another limitation is that the distance from the stream or river to the site in need of energy may be considerable [23].
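The feasibility question above comes down to the standard hydropower relation P = η·ρ·g·Q·H. A quick sketch (the flow, head, and efficiency numbers are illustrative assumptions, not site data from this chapter):

```python
# Illustrative micro-hydro power estimate: P = eta * rho * g * Q * H.
rho, g = 1000.0, 9.81   # water density (kg/m^3), gravitational accel. (m/s^2)
Q = 0.05                # assumed flow diverted to the intake (m^3/s)
H = 10.0                # assumed gross head (m)
eta = 0.6               # assumed combined turbine + generator efficiency

P_watts = eta * rho * g * Q * H
print(f"~{P_watts / 1000:.1f} kW")  # ~2.9 kW, i.e. pico-hydro scale
```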
The power produced may fluctuate depending on how much water is flowing in the stream or river and on the velocity of the flow [23]. Energy can be stored in batteries so that additional reserve energy is available for times of low generation and/or high demand. However, because hydropower resources tend to be more seasonal than wind or solar resources, batteries may not be able to provide enough energy storage for summer or other seasons with severely limited water flow. Integrating the hydropower with a hybrid wind or solar system can help in areas where water flow is highly seasonal.
Micro-Hydro Success Stories
The main micro-hydro programs in developing countries are in mountainous regions, such as Nepal (around 2,000 installations, including both mechanical and electrical power generation) and other Himalayan countries [25]. In South America, micro-hydro programs are located in countries along the Andes, such as Peru and Bolivia. Smaller programs have been initiated in hilly areas of Sri Lanka, the Philippines, China, and elsewhere [25]. In a variety of locations, micro-hydro systems have been shown to increase employment opportunities in rural areas, which encourages young people to stay in the villages rather than drifting to the cities [25].
Maher et al. [26] describe the successful implementation of pico hydro (<5 kW) systems in two communities in Kenya. Costs for these systems were considerably less than comparable PV or auto battery systems. The systems were constructed locally using available materials and community labor.
Hybrid Systems
According to a report issued in 2010 by the International Energy Agency, UNDP, and United Nations Industrial Development Organization, combining solar, wind, biomass, and minihydro into an integrated/hybrid system supplying a mini-grid is probably the most promising approach to rural electrification [2]. A combination of technologies in an integrated system can promote reliability. A small backup generator may be operated on diesel, biogas, or biodiesel [5]. Hybrid village electrification systems have been implemented in various countries, including China, India, Ghana, South Africa, and Tanzania [1]. A number of studies have examined the feasibility of various kinds of hybrid off-grid systems: wind-diesel [14,27], wind-solar [28,30], wind-PV-diesel [31][32][33], hydro-PV-wind [34], wind-hydrogen [35], and solar-wind-biomass-hydro [36].
What Now? Next Steps
At the global level, a new development paradigm, a pro-poor global climate change agenda, should be embraced. National climate change adaptation and mitigation strategies should be directly linked with poverty reduction and sustainable development goals [1].
To ensure that every person in the world benefits from access to electricity and clean cooking facilities by 2030, the International Energy Agency, UNDP, and United Nations Industrial Development Organization estimate that a cumulative investment of $756 billion, or $36 billion per year, will be required. Although this sounds like a large number, it represents only 0.06% of average annual global gross domestic product (GDP) over the period. The resulting increase in primary energy demand and CO2 emissions would be modest: in 2030, global electricity generation would be 2.9% higher, and CO2 emissions would be only 0.8% higher [2]. Given that the up-front cost of new energy technologies is prohibitively expensive for poor communities, targeted financing and incentives are needed so that low-income communities, households, and entrepreneurs can invest in new energy technologies.
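As a sanity check on the 0.06% figure (the global GDP value below is our assumption for illustration; the report's exact basis may differ):

```python
annual_investment = 36e9   # $36 billion per year, from the text
world_gdp = 60e12          # assumed average annual global GDP (~$60 trillion)
print(f"{annual_investment / world_gdp:.2%}")  # -> 0.06%
```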
Students and young entrepreneurs in collaboration with nongovernmental organizations (NGOs) have done some of the most innovative work in new low-cost sustainable energy applications. Such partnerships should be promoted. The World Bank's Development Marketplace Grants, for example, provide global recognition and seed funding for creative ideas, technologies, and services that matter for development, so that they may grow and replicate.
Summary
The problems of energy access, poverty, and climate change are intertwined in the developing world. The poor often lack access to energy at all or have access only to inefficient and unhealthy forms of energy. As the poor gain access to energy, their contribution to climate change will increase, unless they leapfrog to renewable energy technologies. Unfortunately, the poor are the most vulnerable to many impacts of climate change, including increased food insecurity and amplified health risks. Access to energy can reduce their vulnerability to climate change impacts.
Fortunately, increasing energy access, alleviating rural poverty, and reducing greenhouse gas emissions can all be complementary, their overlap defining an energy-poverty-climate nexus.
Solar, wind, biomass, and micro-hydro systems have all been used successfully in various locations to provide off-grid renewable power to rural areas. Each has advantages and drawbacks, depending on the particular location. Combining solar, wind, biomass, and minihydro into an integrated/hybrid system supplying a mini-grid is probably the most promising approach to rural electrification.
Providing universal access to modern energy services by 2030 would cost only 0.06% of average annual global GDP during the period. What else could be a more worthwhile investment?
"Economics"
] |
Small-data global existence of solutions for the Pitaevskii model of superfluidity
We investigate a micro-scale model of superfluidity derived by Pitaevskii (1959 Sov. Phys. JETP 8 282–7) to describe the interacting dynamics between the superfluid and normal fluid phases of Helium-4. The model involves the nonlinear Schrödinger equation (NLS) and the Navier–Stokes equations, coupled to each other via a bidirectional nonlinear relaxation mechanism. Depending on the nature of the nonlinearity in the NLS, we prove global/almost-global existence of solutions to this system in T^2: strong in the wavefunction and velocity, and weak in the density.
Introduction
Superfluids constitute a phase of matter that is achieved when certain substances are isobarically cooled, resulting in Bose-Einstein condensation. That Helium-4 (and also its isotope Helium-3) undergoes such a quantum mechanical phase transition was first experimentally discovered [Kap38, AM38] over 80 years ago, and it has been the subject of intense inquiry ever since. Despite this, a single theory that describes the phenomenon continues to elude us.
The general picture is that at non-zero temperatures, there is a mixture of two interacting phases: the normal fluid and the superfluid [PL11, Vin04, Vin06, BSS14, BDV01, BLR14]. It is important to note that this is not like classical multiphase flow, where one can define a clear boundary between the two phases. Instead, some atoms are in the normal fluid phase and some are in the superfluid phase, with both fluids occupying the entire volume. The normal fluid is well-modeled by the Navier-Stokes equations (NSE), while the description of the superfluid varies with the length scale of interest (see [BBP14, Jay22] for a discussion). Briefly, the superfluid is described by the NSE at large scales [Hol01], a vortex model at intermediate scales [Sch78, Sch85, Sch88], and the nonlinear Schrödinger equation (NLS) at small scales [Kha69, Car96]. The macro-scale, NSE-based description is a current topic of numerical research [VSBP19, RBL09, SRL11], and has also been rigorously analyzed [JT21]. In this paper, we use the micro-scale, NLS-based model by Pitaevskii [Pit59], which has previously been considered in [JT22a, JT22b].
A missing piece of the physics puzzle here is the nature of the interaction mechanism. It is known that the interaction between the fluids is dissipative/retarding. Pitaevskii thus derived a micro-scale model that intertwines the NLS (for the superfluid) and the NSE (for the normal fluid). The coupling is nonlinear, bidirectional, and transfers mass, momentum, and energy between the two fluids. For the combined system of both phases, the model respects the conservation of total mass and total momentum, while the total energy decreases in accordance with the dissipation.
The NLS, in its most popular form, is fundamentally a dispersive partial differential equation with a cubic nonlinearity that models systems with low-energy wave interactions, such as dipolar quantum gases [CMS08, Soh11]. The well-posedness issues of the NLS have been tackled in many situations [CKS+], and its scattering solutions [Tao06, Dod16] have been of particular interest. The NLS can also be recast as a system of compressible Euler equations (referred to as quantum hydrodynamics, or QHD) with an additional quantum pressure term [CDS12]. This system is a special case of the more general Korteweg models, which have been subject to much mathematical analysis. Hattori and Li [HL94] showed that the 2D QHD equations are locally well-posed for high-regularity data, and improved this to global well-posedness in the case of small data [HL96]. Jüngel [JMR02] established local strong solutions to the QHD-Poisson system, formed by including a potential governed by the Poisson equation. The same model possesses local-in-time classical solutions in 1D when the data is highly regular [JL04]. For initial conditions close to a stationary state, the solutions are global-in-time and converge exponentially fast to the stationary state. Blow-up criteria have also been derived for QHD [WG20, WG21]. While the discussion so far has focused on strong solutions, there has also been rising interest in the weak formulation of QHD-like models. Antonelli and Marcati [AM09, AM12, AM15] introduced the novel fractional step method in the pursuit of finite-energy global weak solutions. The idea was to revert (from QHD) to the NLS, which was easier to solve, and account for collision-induced momentum transfer via periodic updates to the wavefunction. In this process, the occurrence of quantum vortices could also be characterized by imposing irrotationality of the velocity field (away from vacuum regions). Using special test functions that permit better control of the quantum pressure term, Jüngel [Jün10] proved that the viscous QHD system admits weak solutions in 2D. For small values of viscosity, these solutions were global in time. The proof utilized a redefinition of the velocity that converts the hyperbolic continuity equation into a parabolic one, a technique that was pioneered by Bresch and Desjardins [BD04] for Korteweg systems in general. Vasseur and Yu [VY16b] expanded Jüngel's result to a wider class of test functions while adding some physically motivated drag terms. Various forms of damping have appeared in the literature, primarily serving two different roles: (i) as an approximating scheme for both the compressible Navier-Stokes equations with degenerate viscosities [LX15, VY16a] and Korteweg-type systems [AS17, ACLS20, AS22], and (ii) as a means of proving global existence [Cha20] or relaxation to a steady state [BGLVV22, SYZ22]. Most works involving Korteweg systems use the notion of κ-entropy that was first demonstrated in [BDZ15]. Furthermore, even questions of non-uniqueness (and weak-strong uniqueness) of weak solutions have been addressed for the QHD-Poisson system with linear drag, using convex integration [DFM15].
It is only at absolute zero temperature that superfluids can be well-approximated by the NLS alone. For temperatures above zero and below about 2.17 K, we have a mixture of both fluids. In this article, we consider Pitaevskii's model [Pit59], which couples the NLS and the NSE. The model was initially derived for a fully compressible normal fluid. While compressible fluids are more realistic in some scenarios, they are also much more challenging to rigorously analyze and to numerically simulate. [Fei04, Lio96a] contain several classical results on the compressible NSE. On the other hand, the incompressible NSE (with no density equation) is arguably the most studied nonlinear partial differential equation in mathematics (see [Tem77, MB02, RRS16] for classical results). In this article, we approximate the normal fluid as incompressible, but the density persists, varying from point to point in the flow domain. What results is an incompressible, inhomogeneous flow: the compressible NSE appended with the condition of divergence-free velocity. This model of fluids was first investigated by Kazhikov for local weak solutions when the initial density is bounded from below [Kaz74], and vacuum states were allowed in an improvement by Kim [Kim87]. Further advances for weak solutions were made by Simon [Sim90], who in particular analyzed their continuity at t = 0 and also proved the existence of global solutions in a less regular space. Meanwhile, Ladyzhenskaya and Solonnikov [LS78] presented the case for strong solutions: with the density bounded from below, it is possible to construct local (global) unique solutions in 3D (2D). Furthermore, if the data is small enough, one obtains global-in-time unique solutions. Results in the same spirit were proven by Danchin for small perturbations from the stationary state in critical Besov spaces [Dan03]. He further established the inviscid limit of the incompressible inhomogeneous NSE in subcritical spaces [Dan06]. The local existence theorem of Ladyzhenskaya and Solonnikov was shown to be valid for non-negative densities as long as the initial data satisfied a compatibility criterion [CK03]. This work by Choe and Kim has since spurred several other results that utilize such compatibility conditions on the initial data.
Given the immense interest in the NLS and NSE, the rigorous study of a coupled system is a natural next step. Indeed, one such two-fluid model of superfluidity was analyzed by Antonelli and Marcati in [AM15]. The superfluid was described by the NLS, and the normal fluid by the compressible NSE. This is similar to the system considered in this article, save for two key differences. Firstly, their model did not permit any mass transfer between the two fluids (which allows for global-in-time solutions). As we shall discuss, this mass transfer is the biggest roadblock in Pitaevskii's model and essentially dictates the strategy used. Secondly, the momentum transfer in their model is unidirectional and linear, affecting only the superfluid phase (as opposed to the bidirectional and nonlinear nature of the coupling in this work).
Thanks to the retarding interactions between the two phases, the NLS acquires a dissipative flavor that renders it parabolic. This lets us extract dissipative contributions to the energy estimates. To analyze the momentum equation of the NSE, we work with initial velocity in H^1_d. This yields appropriate regularity for the velocity, in order to adequately control the relaxation mechanism, which contains quadratic terms in the velocity. Parting ways from [Kim87], we begin with an initial density field that is bounded from below. This is necessary since the continuity equation is unusual and is not a homogeneous transport equation. Our primary goal is to avoid the occurrence of zero or negative densities at any time. To this end, we must limit the effect of inhomogeneity, which is the relaxation mechanism that allows for mass and momentum transfer between the two fluids. As a serendipitous by-product of this non-zero density field, we also obtain control of the time derivative of the velocity, which allows the use of compactness arguments to actually obtain strong continuity in time of the velocity field.
The crux of this work is to derive a priori estimates and carefully extract coercive terms that allow for norms to decay, while avoiding any derivatives on the density of the normal fluid. To engineer this decay, we include a linear drag term in the NSE. Additionally, we also present results for any polynomial-type nonlinearity in the NLS. We now mention the notation used in the article before describing the model and stating the results.
1.1. Notation. We denote by H^s(T^2) the completion of C^∞(T^2) under the Sobolev norm H^s, while we use Ḣ^s(T^2) when referring to the homogeneous Sobolev spaces. Consider a 2D vector-valued function u ≡ (u_1, u_2); we denote by H^s_d(T^2) the completion of smooth, divergence-free vector fields under the H^s norm. The L^2 inner product, denoted by ⟨·, ·⟩, is sesquilinear (the first argument is complex-conjugated, indicated by an overbar) to accommodate the complex nature of the Schrödinger equation, i.e., ⟨ψ, φ⟩ := ∫_{T^2} ψ̄φ dx. Since the velocity and density are real-valued functions, we ignore the complex conjugation when they constitute the first argument of the inner product.
We use the subscript x to denote Banach spaces defined over T^2. For instance, L^p_x := L^p(T^2) and H^s_{d,x} := H^s_d(T^2). For spaces/norms over time, the subscript t denotes the time interval under consideration, such as L^p_t := L^p([0,t]). The Bochner spaces L^p(0, T; X) and C([0, T]; X) have their usual meanings, as L^p and continuous maps (respectively) from [0, T] to a Banach space X.
We also use the notation X ≲ Y and X ≳ Y to imply that there exists a positive constant C such that X ≤ CY and CX ≥ Y, respectively. When appropriate, the dependence of the constant on various parameters is denoted using a subscript. Throughout the article, C is used to denote a (possibly large) constant that depends on the system parameters listed in (2.4), while κ and ε are used to represent (small) positive numbers. The values of C, κ, and ε can vary across the different steps of the calculations.
1.2. Organization of the paper. In Section 2, we present and discuss the mathematical model, along with statements of the main results. Several a priori estimates, at increasing levels of regularity, are derived in Section 3. The construction of the semi-Galerkin scheme and the renormalization of the density are discussed in Section 4.
Mathematical model and main results
The superfluid phase is described by a complex wavefunction, whose dynamics are governed by the nonlinear Schrödinger equation (NLS), while the normal fluid is modeled using the compressible Navier-Stokes equations (NSE). In all generality, the full set of equations can be found in [Pit59, Section 2]. In what follows, we use a slightly simplified and modified version of the equations, arrived at by making the following assumptions.
(1) We consider a general power-law nonlinearity for the NLS. This is done by choosing the internal energy density of the system to be (2µ/(p+2))|ψ|^{p+2}, for 1 ≤ p < ∞ (see Remark 2.5). We also assume that the internal energy is independent of the density of the normal fluid.
(2) We work in the limit of a divergence-free normal fluid velocity. This means that the pressure is a Lagrange multiplier, rendering the equations of state and entropy unnecessary. Note that, due to the nature of the coupling between the two phases, the density of the normal fluid is not simply transported.
(3) A linear drag term has been included in the momentum equation to account for the lack of coercive estimates for the velocity.
(4) Planck's constant (ℏ) and the mass of the Helium atom (m) have both been set to unity for simplicity.
We now state the equations used in this paper. Here, ψ is the wavefunction describing the superfluid phase, while ρ, u, and q are the density, velocity, and pressure (respectively) of the normal fluid. The normal fluid has viscosity ν and drag coefficient α, while µ (a positive constant) is the strength of the scattering interactions within the superfluid¹. This scattering nonlinearity has an exponent p ∈ [1, ∞). Finally, λ is a positive constant that indicates the coupling strength between the two phases. The coupling is denoted by the nonlinear operator B.
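For orientation, a schematic form of the system, reconstructed from the description in this section and the companion works [JT22a, JT22b] (the exact coefficients, signs, and source terms are our assumptions, not a quotation of the paper's display), is:

```latex
% Schematic reconstruction of the Pitaevskii system; details are assumptions.
\begin{align*}
  i\,\partial_t \psi &= -\tfrac{1}{2}\Delta\psi + \mu|\psi|^{p}\psi
                        - i\lambda B_u\psi, && \text{(NLS)}\\
  \partial_t(\rho u) + \operatorname{div}(\rho u\otimes u)
    - \nu\Delta u + \alpha u + \nabla q &= \lambda F(\psi,u), && \text{(NSE)}\\
  \partial_t\rho + \operatorname{div}(\rho u) &= \lambda S(\psi,u), && \text{(CON)}\\
  \operatorname{div} u &= 0, && \text{(DIV)}
\end{align*}
```

where $B_u\psi := \tfrac{1}{2}(-i\nabla - u)^2\psi + \mu|\psi|^p\psi$ is the relaxation operator suggested by the discussion below (relative kinetic plus potential energy), and $F$ and $S$ stand for the momentum- and mass-transfer terms generated by the coupling $B$.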
The Schrödinger equation dictates the evolution of the wavefunction, generated via the action of the Hamiltonian (roughly, the energy) of the system. The coupling B resembles the relative kinetic energy² between the two phases. This is evident upon recalling that the quantum mechanical momentum operator (in the position basis) is −iℏ∇. The purpose of this coupling is to allow for mass/momentum transfer between the two phases as a means of relaxation or dissipation.
These equations are supplemented with the initial conditions (INI). We use periodic boundary conditions, i.e., we are working on the two-dimensional torus [0, 1]^2.
2.1. Weak solutions and the existence theorems. Having stated the model, the notion of weak solutions to (NLS), (NSE), (CON), and (DIV) (with initial conditions (INI) and periodic boundary conditions), henceforth referred to as the Pitaevskii model, is as follows.
Definition 2.1 (Weak solutions³). For a given time T > 0, a triplet (ψ, u, ρ) is called a weak solution to the Pitaevskii model if the following conditions hold.
²There is also the nonlinear wavefunction term, so that the relaxation to equilibrium also depends on the potential energy of the superfluid.
Remark 2.2. We note that the last two terms in (NSE) are gradients, just like the pressure term, and thus vanish in the definition of the weak solution (since the test function is divergence-free). Henceforth, we absorb these two gradient terms into the pressure, relabeling the new pressure as q.
We are now ready to state our main results.
Theorem 2.3 (Global existence). Fix any p ∈ [1, 4), and suppose the initial density satisfies m_i ≤ ρ_0 ≤ M_i a.e. in T^2. Then, provided the initial data satisfy a suitable smallness criterion, there exists a global weak solution (ψ, u, ρ) to the Pitaevskii model such that the density is bounded between m_f ∈ (0, m_i) and M_f := M_i + m_i − m_f. The solution has the associated regularity for 1 ≤ r < ∞, and additionally satisfies the energy equality (2.8). For the case of higher-order nonlinearities, i.e., when p ≥ 4, we obtain "almost global" existence.
Theorem 2.4 (Almost global existence). In the case of p = 4, the solution to the Pitaevskii model has the same regularity properties as in Theorem 2.3, except that its existence is guaranteed only on a finite time interval [0, T], where T depends on ε, the size of the (sufficiently small) initial data.
For p > 4, the existence time scales polynomially with the size of the data, as T ∼ ε^{−p/(p−4)}. In both cases, these solutions also satisfy the energy equality on [0, T].
While deriving the a priori estimates, we have to distinguish between the cases 1 ≤ p < 2, p = 2, 2 < p < 4, p = 4, and p > 4. This is due to the poor control we have on the superfluid mass. Given that we are on T^2, and our equations do not preserve functions with vanishing mean, the L^2 norm becomes the limiting factor even in the decay of higher norms. In the case of the wavefunction, this corresponds to the mass of the superfluid. Similarly, for the velocity, we do not get coercive estimates from the viscosity term alone, at least at the level of the kinetic energy estimate. Thus, we introduce a linear drag term.
Remark 2.5. Since the self-interaction term in (NLS) involves a discontinuity due to the complex magnitude, evaluating the H^2 norm as in (3.51) requires p ≥ 1. In particular, points of superfluid vacuum (ψ = 0) may lead to problems. As an illustration, consider D^2(|f|^p f) for a real-valued function f, which can be regularized as D^2((f^2 + ε)^{p/2} f). Upon differentiation, the most problematic term is (f^2 + ε)^{p/2−2} f^3 (Df)^2. To be able to handle this term in the limit ε → 0, at the points where f = 0, we require that 2(p/2 − 2) + 3 = p − 1 ≥ 0. This argument can be easily extended to a complex-valued function.
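For the reader's convenience, the full second derivative of the regularized term (a routine computation we supply here; it is not displayed in the original) expands as:

```latex
\begin{align*}
  D\big((f^2+\varepsilon)^{p/2} f\big)
    &= \Big[(f^2+\varepsilon)^{p/2} + p f^2 (f^2+\varepsilon)^{p/2-1}\Big] Df,\\
  D^2\big((f^2+\varepsilon)^{p/2} f\big)
    &= \Big[(f^2+\varepsilon)^{p/2} + p f^2 (f^2+\varepsilon)^{p/2-1}\Big] D^2 f\\
    &\quad + \Big[3p f (f^2+\varepsilon)^{p/2-1}
              + p(p-2) f^3 (f^2+\varepsilon)^{p/2-2}\Big] (Df)^2.
\end{align*}
```

Near a zero of f, the final bracket behaves like f^{p−1}(Df)^2 as ε → 0, which is exactly the threshold p ≥ 1 identified in the remark.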
Remark 2.6. The regularity of the solutions seems to suggest that the wavefunction and velocity are strong solutions. Indeed, this is true, as they are strongly continuous in their respective topologies. On the other hand, the density is truly a weak solution, and this is the reason for referring to the triplet as a weak solution. This low regularity of the density influences the nature of the calculations that are employed.
The proofs of both Theorems 2.3 and 2.4 follow from detailed a priori estimates and a semi-Galerkin scheme used to construct the solutions. The a priori estimates differ only slightly across the various ranges of p, as will be illustrated. The general approach to the problem is motivated by that of [Kim87], but we do not allow the density to vanish anywhere. This is because the presence of u in the nonlinear coupling means we are required to control it in L^∞(T^2) to prevent the formation of vacuum (and regions of negative density). Beginning from the usual mass and energy estimates, we derive a hierarchy of several energies for the wavefunction and velocity.
2.2. Significance of the results. The holy grail of superfluid modeling is to find a unified description that works at all length scales, and rigorous validation of any proposed models is crucial to this process. The thrust of this paper is the analysis of Pitaevskii's description of superfluidity, the most important feature of which is to characterize the mass transfer between the two fluids. In the course of proving the main theorems, we quantify the conversion of superfluid into normal fluid (Lemma 3.1), confirming the interaction-induced relaxation mechanism. We establish the validity of the model in the limit t → ∞, even as the superfluid mass decreases (polynomially) quickly. The transition in the behavior of the solutions, from global to almost-global, as the self-interactions are increased in strength, is in accordance with the decreasing mass decay. However, the threshold p = 4 still begs for a physical explanation. Of the assumptions underlying our theorems, relaxing the demands of small data and positive normal fluid density would be important future advancements in the context of the Pitaevskii model.
The rigorous analysis of superfluid models is a fairly new topic, and we expect this work to pave the way for further results in this direction. Some questions of interest, particularly of consequence to physicists and engineers, are the issues of stability and compressibility. For example, in [Pit59], Pitaevskii investigated the propagation of sound waves in superfluid helium by studying the case when the superfluid has only small density gradients. It has to be noted that his derivation of the model accounted for the contributions to the internal energy of the system from both fluids. Thus, by utilizing appropriate self-interactions (for instance, non-local potentials, or including the normal fluid density), it would be important to test the model against experimental findings. A mathematical guarantee of the existence of solutions to the Pitaevskii model is essential to complement the efforts to numerically simulate such complicated systems [BSZ+23]. It is worth mentioning that a better understanding of superfluidity could be revolutionary to most modern experiments in physics (including the Large Hadron Collider [Leb94, RM18]), and also to the fields of quantum computing [HDT21], gravitational wave astronomy [SDLPS17], and dark matter [vKEE+23]. All of these use helium as a cryogen, often as a superfluid-normal fluid mixture due to the superfluid's excellent thermal conductivity [Vin04].
2.3. The strategy. The nonlinear coupling terms in (NLS) and (NSE) may be the most obvious differences between this model and other standard fluid dynamics models, but the source term in (CON) is the most troublesome. The backbone of our approach towards proving global existence is ensuring a positive lower bound for the density at all times. This involves a meticulous handling of the a priori estimates, so as to obtain coercive terms that lead to global-in-time bounds. Throughout the calculations, we ensure that the density norms are only in Lebesgue spaces: ρ is not smooth enough to be differentiated (even weakly). Before we outline the strategy, we discuss some properties of the coupling operator B; henceforth, we refer to the linear (in ψ) part of B as B_L, defined in (2.9). Lemma 2.7 records two integral identities for B. Proof. Both calculations follow using integration by parts.
(1) By (2.9) and the incompressibility of u, we obtain the first identity. (2) Similarly, for the second: in the last inequality, we used Hölder's and Young's inequalities to cancel the third term against the first two. □ Remark 2.8. Given that B provides a relaxation mechanism, it is tempting to treat it, or at least its linear part B_L, as a dissipative second-order elliptic operator whose eigenfunctions can be used as a basis for the semi-Galerkin scheme. Even though B_L is symmetric and has a non-negative real part, this cannot work, since it has time-dependent coefficients, and so its eigenvalues and eigenfunctions also depend on time. Moreover, B_L does not have a spectral gap at 0: its eigenvalues are not known to be bounded from below by a positive number.
Thus, integrating (CON) over T², the advective term vanishes, and using Lemma 2.7 we find that the overall mass of the normal fluid does not decrease with time. Put differently, the coupling causes superfluid to be converted into normal fluid, on average. However, the RHS of (CON) need not be non-negative pointwise in T². So it is not inconceivable that the density of the normal fluid may locally vanish, or even take negative values! To prevent physically unrealistic density fields, and because our estimates require a strictly positive density, we fix a positive lower bound for ρ. Based on this, we define our existence time T, so that ρ does not drop below the lower bound until time T. Our goal is to show that this lower bound can be maintained for arbitrarily long, provided we begin from sufficiently small data.
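For concreteness, a sketch of this mass balance, under the assumption that (CON) takes the advective form ∂_t ρ + u · ∇ρ = Ψ with source Ψ = 2λ Re(ψ̄Bψ), as named in Section 4.5 (the sign of the space integral of Ψ is exactly what Lemma 2.7 supplies):

```latex
% Assumed form of (CON): \partial_t\rho + u\cdot\nabla\rho = 2\lambda\,\mathrm{Re}(\bar\psi B\psi).
% The advective integral vanishes by incompressibility of u.
\begin{align*}
\frac{d}{dt}\int_{\mathbb{T}^2}\rho\,dx
  \;=\; 2\lambda\int_{\mathbb{T}^2}\operatorname{Re}\!\big(\bar\psi\,B\psi\big)\,dx
  \;\ge\; 0 .
\end{align*}
```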
Definition 2.9 (Existence time). Start with an initial density field ρ_0 bounded between m_i and M_i. Given 0 < m_f < m_i, we define the existence time T* for the solution as in (2.11). A formal solution to the continuity equation can be written using the method of characteristics. Let X_α(t) be the characteristic starting at α ∈ T²; to wit, the characteristic solves the differential equation (2.12), where u is the velocity of the normal fluid. So, along such characteristics, the density evolves according to (2.13). From (2.11) and (2.13), it is clear that a sufficient condition to ensure that the density is bounded from below by m_f is that the time integral of 2λ‖Re(ψ̄Bψ)‖_{L^∞_x} stays below m_i − m_f for all T ≤ T*. This can in turn be ensured through the sufficiency condition (2.15). So, we are looking to show that (2.15), actually a stronger version of it, holds irrespective of T, so that we can conclude that the density is always greater than m_f. This is achieved by selecting small enough data, and allows us to deduce the global existence of solutions. Since Bψ involves a second-order derivative, its L^∞_x boundedness leads us to high-regularity spaces. The momentum equation (NSE) is used to estimate ‖u‖_{L²_t H²_x} and ‖u‖_{L²_t H¹_x}, which are useful in handling parts of ‖Bψ‖_{L^∞_x}. As a by-product of these calculations, we are also able to bound ∂_t u, which plays a part in the compactness arguments for the strong time-continuity of u. The Schrödinger equation (NLS) is used to derive increasingly higher-order a priori estimates of ψ. In all these calculations, we work with a density that is only in L^∞_x.
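A minimal numerical illustration of (2.12)-(2.13): the density is transported along characteristics of a prescribed divergence-free velocity and updated by the source, staying close to its initial value as long as the time-integrated source is small. The fields u and Psi below are stand-ins of our own choosing, not those of the model.

```python
import numpy as np

def u(x, t):
    # hypothetical divergence-free velocity on the torus [0, 2*pi)^2
    return np.array([np.sin(x[1]), np.sin(x[0])]) * np.exp(-t)

def Psi(x, t):
    # stand-in for the source 2*lambda*Re(conj(psi)*B*psi)
    return 0.05 * np.exp(-t) * np.cos(x[0]) * np.cos(x[1])

def density_along_characteristic(alpha, rho0, T=10.0, dt=1e-3):
    """Integrate dX/dt = u(X, t) and d(rho)/dt = Psi(X, t), cf. (2.12)-(2.13)."""
    x, rho = np.array(alpha, dtype=float), rho0
    for n in range(int(T / dt)):
        t = n * dt
        x = (x + dt * u(x, t)) % (2 * np.pi)   # explicit Euler step
        rho += dt * Psi(x, t)
    return rho

rho_T = density_along_characteristic(alpha=(1.0, 2.0), rho0=1.0)
print(rho_T)  # deviates from rho0 by at most the time integral of |Psi|
```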
A priori estimates
Throughout this section, we derive the required a priori estimates, using formal calculations. We assume the wavefunction and velocity are smooth functions, and that the density is bounded from below by m_f > 0 in [0, T]. Here, T is any time less than the local existence time T*, and is extended to global existence in Section 3.5.

3.1. Mass estimate. Lemma 3.1 quantifies the algebraic decay of the superfluid mass. Proof. Multiplying (NLS) by ψ̄, taking the real part, and integrating over T² gives (3.1). The Laplacian term on the RHS of (NLS) vanishes using integration by parts. By Lemma 2.7, the second term in (3.1) is bounded from below by the L^{p+2}_x norm. Since we are in a domain of unit volume, Hölder's inequality then leads to a closed differential inequality for the mass. It is now easy to conclude that the mass of the superfluid (using the quantum mechanical interpretation of the wavefunction) decays algebraically in time, as in (3.4), where S_0 is the initial mass of the superfluid. □

3.2. Energy estimate. In this subsection (Section 3.2), we derive the governing equations for the energy E(t). In Section 3.3, we work with a higher-order energy X(t), combining it with E(t) in Section 3.3.3. We begin by acting with the gradient operator on (NLS), multiplying by ∇ψ̄, and taking the real part. Integrating over T², we observe that the first term on the RHS vanishes upon integration by parts, due to the periodic boundary conditions. The second term on the RHS is similarly integrated by parts. Now, we rewrite the first term on the RHS by expressing the Laplacian in terms of the operator B, giving us a dissipative contribution to the energy estimate, namely |ψ|^p Re(ψ̄Bψ) (3.7). We also have to account for the potential (self-interaction) energy of the wavefunction. To obtain this, we multiply (NLS) by 2ψ̄ and take the real part. Multiplying the resulting equation by µ|ψ|^p and integrating over T² produces the internal energy term (2µ/(p+2))‖ψ‖^{p+2}_{L^{p+2}_x}. The terms on the RHS are canceled once we include the energy of the normal fluid. We first rewrite (NSE) in the non-conservative form and apply the Leray projector (see Remark 2.2) to get (NSE'): here, P is the Leray projector, which maps a Hilbert space onto its divergence-free subspace, thus removing any purely gradient terms. We also apply the Leray projector to (NSE) to obtain (NSE-L). Taking the inner product of both (NSE') and (NSE-L) with u, using incompressibility, and adding them, we arrive at the energy equation for the normal fluid (3.10).

Therefore, by adding (3.9) and (3.10), we obtain the energy equation (3.11). Thus, the energy is bounded from above by E_0, the initial energy of the system (3.13). Next, we wish to show that the energy actually decays algebraically in time, under a certain smallness condition on the initial data. First, note (3.14), where we used an argument similar to the one from the proof of Lemma 2.7 to get the last inequality. We then use (2.9) to arrive at (3.15). We bound the first term on its RHS using Hölder's inequality and Gagliardo-Nirenberg (GN) interpolation. For the second term in (3.15), we interpolate the L³_x norm, while also applying the Hölder, Poincaré, and Young inequalities, as well as the GN interpolation inequality, to get (3.17). For sufficiently small values of κ and E_0, the RHS of (3.17) can be absorbed into the LHS of (3.15). We also use the Poincaré inequality to convert the last term on the LHS of (3.15) into a coercive term for the internal energy term (2µ/(p+2))‖ψ‖^{p+2}_{L^{p+2}_x} in E(t). To this end, we observe (3.18): in the last inequality, we interpolated between the L^{p+2}_x and L²_x norms, which may be done when p > 2.
By choosing κ sufficiently small, we can absorb the second term on the RHS into the LHS. For p ≤ 2, we can simply replace ‖ψ‖^{p+2}_{L^{p/2+1}_x} by ‖ψ‖^{p+2}_{L²_x}, since we are on a finite-size domain. Thus, irrespective of the value of p, (3.15) becomes a coercive inequality. While we have the required coercive terms on the LHS, we cannot yet obtain a decay estimate for E(t), since the second term on the RHS is out of reach using E only. In order to control it, we set up an analogous inequality for a higher-order energy.
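The algebraic-in-time decay invoked for the superfluid mass (3.4), and again for the energies below, rests on a standard ODE comparison; a minimal sketch, with all constants suppressed and y(t) standing for the squared L²_x norm:

```latex
% ODE comparison behind the algebraic decay (a sketch; c > 0 generic).
\begin{align*}
\frac{dy}{dt} \le -c\,y^{1+\frac{p}{2}}, \quad y(0) = S_0
\;\Longrightarrow\;
\frac{d}{dt}\, y^{-\frac{p}{2}} \ge \frac{cp}{2}
\;\Longrightarrow\;
y(t) \le \Big( S_0^{-\frac{p}{2}} + \tfrac{cp}{2}\, t \Big)^{-\frac{2}{p}} .
\end{align*}
```

The resulting t^{-2/p} rate is the one whose time-integrability (2/p > 1) separates the case p < 2 from p ≥ 2 in Section 3.5.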
3.3. Higher-order energy estimate. In this subsection, we obtain further bounds for ψ and u, this time with one more derivative than the energy E.
3.3.1. The Schrödinger equation. Once again, the first term on the RHS of (NLS) vanishes due to the boundary conditions. We now estimate the terms on the RHS of (3.20). The first term gives a dissipative term for ψ. For the term I_4, we again integrate by parts, followed by Hölder's inequality.
Thus, (3.20) becomes (3.21). The first of these terms is bounded using the Poincaré and GN interpolation inequalities; we have also applied Young's inequality to extract the dissipative terms in the last step. We again use κ to denote a small number whose value shall be fixed later on, and C_κ is a constant whose value depends on κ and the system parameters.
Similarly, we bound the second term on the RHS of (3.21). Finally, we apply the Sobolev embedding and Poincaré inequalities to bound I_7. Combining all these inequalities into (3.21) results in (3.25), where we have absorbed κ‖D³ψ‖²_{L²_x} into the LHS with a sufficiently small κ.
3.3.2. The Navier-Stokes equations. We shall now derive a higher-order estimate for the velocity field, which shall be combined with (3.25). Starting with (NSE'), we first multiply it by ∂_t u and integrate over the domain to obtain (3.26), whose RHS we now control. For the first term, we use the GN interpolation and Poincaré inequalities, and Young's inequality then lets us extract the required dissipative term. For the second integral in (3.26), the Bψ term is handled via the GN interpolation and Young's inequalities. The third integral in (3.26) produces a bound involving ‖ψ‖²_{L⁶_x}‖Bψ‖²_{L²_x}, where the term Bψ is handled just like in I_9. Finally, the last term in (3.26) leads to (3.27), which involves Re(ψ̄Bψ)|u|². We estimate the second term on the RHS of (3.27) using the Hölder and GN interpolation inequalities, and similarly for the third term in (3.27) (which carries the coefficient αλ). Substituting the above estimates into (3.26), we arrive at (3.28), where C_κ depends on κ and the system parameters.
So far, we have obtained equations for ‖∇u‖_{L²_x} and ‖∆ψ‖_{L²_x}, while including the higher-order dissipation corresponding to the wavefunction, ‖∇(Bψ)‖²_{L²_x}. What remains is to consider the higher-order velocity dissipation ‖∆u‖²_{L²_x}. To this end, we multiply (NSE') by −θ∆u, with θ to be determined, and integrate over the domain. This gives (3.29); the troublesome term appears with a small coefficient, so it can be absorbed into the LHS. The second integral is manipulated just as I_8. The bound for the integral I_14 follows from the GN interpolation, Poincaré, and Young inequalities, and the remaining integrals in (3.29) are handled in a similar manner. Thus, (3.29) becomes (3.30). We now add (3.25), (3.28), and (3.30), and observe that the last three terms on the RHS are the same as I_5, I_6, and I_7 in (3.21). We bound them just as in (3.22)-(3.24). Choosing θ sufficiently small, and subsequently κ also small enough, we absorb the dissipative terms, in particular ‖∆u‖²_{L²_x}, on the RHS into the corresponding terms on the LHS. Finally, what remains is (3.31), where we absorbed the residual κ-terms with an appropriate choice of κ. This is the higher-order energy estimate.
3.3.3. The Grönwall inequality step.
Having derived the equations for the higher-order norms of u and ψ, and while accounting for the relevant dissipative terms, the goal now is to use a Grönwall-type argument.

Lemma 3.2 (Algebraic decay rate for energies). The sum of the energy E(t) and the higher-order energy X(t), built from ‖∆ψ(t)‖²_{L²_x} and the corresponding higher-order velocity norm, decays algebraically in time.

Proof. We begin by denoting the total dissipation by Y, so we can rewrite (3.31), after updating θ, κ, E_0, and S_0 to be sufficiently small, as (3.32), where Q_1(X + E) is a strictly super-linear polynomial, while Q_2(X + E) contains both linear and super-linear terms. To arrive at (3.32), we have also expanded the Sobolev norms of the velocity and of the wavefunction. Next, we add (3.11) and (3.32). We use the Poincaré inequality to rewrite Y in order to get decaying norms; indeed, Y ≳ X. Additionally, we also use the analysis in (3.14) to control ‖Bψ‖²_{L²_x}, which in turn can be downgraded to ‖∇ψ‖²_{L²_x} using the Poincaré inequality. One can also represent ‖Bψ‖²_{L²_x} on the RHS of (3.35) using the estimates (3.16) and (3.17) and the GN inequality. After all of the above manipulations, (3.35) now reads as (3.36), where β depends on the system parameters, and the polynomials Q_1 and Q̃_2 are strictly super-linear. The first term on the RHS results from the estimates in (3.18). As for the second term on the RHS, we note that it can be absorbed into the LHS by tweaking S_0.
For notational convenience, we write Z := X + E and use Q := Q_1 + Q̃_2 to denote the strictly super-linear polynomial on the RHS of (3.36), leaving us with (3.37). The Duhamel solution for Z(t) obeys (3.38). We set the size of the initial data as Z(0) =: Z_0 ≤ ε. We need a bootstrap argument to show that Z(t) ≤ 3ε for t ∈ [0, T]. Specifically, we prove that the hypothesis Z(t) ≤ 4ε for t ≤ t_1 leads to the stronger conclusion Z(t) ≤ 3ε for t ≤ t_1, where t_1 ∈ [0, T]. To this end, we estimate each integral on the RHS of (3.38). The first integral is split into two parts to take advantage of the exponential decay factor, as in (3.39); the last inequality there is a result of the exponential decay of the first term, compared to the algebraic decay of the second. The second integral in (3.38) is more straightforward. Now we choose ε small enough (call it ε_0) so that the RHS is at most ε. Similarly, the contribution from the first integral term in (3.38) is made less than ε for all S_0 ≤ ε_0 < 1 small enough. This completes the bootstrap argument, and we can see that indeed Z(t) ≤ 3ε. For ε_0 sufficiently small, the linear dissipation in (3.37) dominates the nonlinearities, and we may write the equation as (3.40), whose solution, following (3.37)-(3.39), obeys (3.41). Returning to (3.35), we now absorb the last term on the RHS into the LHS, which is possible for small enough data, since sup_{0≤t≤T} Z(t) ≤ 3ε_0. Furthermore, in the regime of small data, the super-linear polynomial Q_1 can be dominated by the linear term on the RHS, which leaves us with (3.42). Employing the bound for Z from (3.41) on the RHS of (3.42) and integrating over [0, T], we estimate the dissipation as in (3.43); the last inequality there holds because S_0 < 1. Thus, we can achieve small values for the RHS of (3.43) by selecting appropriate Z_0 and S_0.
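A toy numerical version of the bootstrap (every coefficient below is an illustrative assumption, and Q is a stand-in super-linear polynomial): for small data, the solution of the model equation dZ/dt = −βZ + Q(Z) + S_0 e^{−t} never leaves the 3ε ball.

```python
import numpy as np

beta, eps, S0 = 1.0, 1e-3, 1e-3     # illustrative sizes (assumptions)
Q = lambda z: z**2 + z**3           # stand-in strictly super-linear polynomial

dt, T = 1e-3, 50.0
z, zmax = eps, eps                  # Z(0) = Z_0 = eps
for n in range(int(T / dt)):
    t = n * dt
    z += dt * (-beta * z + Q(z) + S0 * np.exp(-t))   # explicit Euler step
    zmax = max(zmax, z)

print(f"sup Z = {zmax:.2e}, bootstrap threshold 3*eps = {3 * eps:.2e}")
```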
Another useful estimate for the dissipative terms results from integrating (3.42) over the time interval [t, 2t], where t ≥ 1. This gives (3.44). This time-decaying bound on the dissipation is necessary to obtain a sharp control of the dynamics at large times. □
3.4. The highest-order a priori estimate for ψ. From the previous analysis, we have obtained Bψ ∈ L²_{[0,T]} H¹_x. However, as pointed out in the discussion following Definition 2.9, we seek control of Bψ in L¹_{[0,T]} L^∞_x. To this end, we would like to obtain an even higher-order a priori estimate, only for ψ.

Lemma 3.3 (Algebraic decay rate for the highest-order norm of ψ). For S_0, E_0, and Z_0 small enough, and with s = 5/4, the homogeneous Sobolev norm ‖ψ(t)‖_{Ḣ^{2s}_x} decays algebraically in time. Moreover, provided the initial data are sufficiently small, the higher-order dissipation ‖ψ‖_{L²_{[0,T]} Ḣ^{2s+1}_x} may be made as small as required, independent of the time T.

This inclusion is precisely what we need to control the term |u|²ψ in the coupling, using the a priori estimates up to this point.
Proof. With s = 5/4, we apply (−∆)^s to (NLS). Just as in Sections 3.2 and 3.3.1, we multiply by (−∆)^s ψ̄, and integrate the real part over T². As a result, the second term on the LHS yields the homogeneous Ḣ^{2s} norm. Using the self-adjoint property of the Laplacian allows us to conclude that the first term on the RHS of (3.45) vanishes. For the second term on the RHS of (3.45), we obtain (3.46); in the last expression of (3.46), we expand the operator B and use Hölder's inequality to arrive at (3.47). Rewriting the LHS in terms of the homogeneous Sobolev norms and the RHS in terms of the usual Sobolev norms, we get (3.48). Since 2s − 1 = 3/2, the algebra property of Sobolev norms is applicable. Using this, (3.4), and (3.41), we estimate the RHS of (3.48). The first term requires interpolation and yields (3.49), where we have retained only the terms that decay the slowest; in arriving at the last inequality in (3.49), we use the fact that Z_0 < 1. For the second term on the RHS of (3.48), we similarly obtain (3.50). While the H^{3/2}_x norm could have been interpolated between H¹_x and H²_x, it does not provide an improved estimate, since both relevant norms of u are bounded by (3.43). In the last term of (3.48), in view of Remark 2.5, we arrive at (3.51), where the penultimate inequality is obtained using (3.4) and (3.41). Therefore, (3.48) becomes (3.52). With the Poincaré inequality, we replace the dissipative term on the LHS by W(t) := ‖ψ(t)‖²_{Ḣ^{2s}_x}. We employ calculations similar to (3.39) to estimate the integrals, i.e., splitting them over [0, t/2] and [t/2, t]. We also use (3.43), in particular the bound ‖u‖² ≲ 1, to simplify the exponential factors outside the integrals. In all, we end up with (3.54), for all t ∈ [0, T]. We simplify further by making use of (3.43) and (3.44) for the respective norms of u, which leads to (3.55). We use (3.55) in (3.52) and integrate over [0, T] to obtain the final dissipative estimate (3.56). This shows that, with small enough data, one can achieve an arbitrarily small value (independent of T) for this highest-order dissipation.
Similarly to (3.44), it is also possible to get a time-decaying estimate by integrating (3.52) over the time interval [t, 2t] for t ≥ 1. This leads to (3.57), where we have used (3.44) and (3.55), and retained only the slowest decaying terms. □ The high-norm control in (3.56) and (3.57) is important because the inequalities can be translated into the desired bounds (on two fewer derivatives) for Bψ. Indeed, in (3.58), for the last three terms, we replaced the homogeneous Sobolev norms by the larger inhomogeneous norms. Combining the analysis in (3.49)-(3.51) with (3.4), (3.43), (3.56), and (3.58), we get the sought-after dissipation bound (3.59). The estimates in (3.59) and (3.60) are used to ensure that the density remains bounded from below.
3.5. Ensuring positive density. We now have all the a priori estimates needed to return to (2.15). For it to hold true, a sufficient condition is (3.61). Depending on the value of p, we now divide the analysis into several cases: 1 ≤ p < 2, p = 2, 2 < p < 4, p = 4, and p > 4.
3.5.1. The case 1 ≤ p < 2. Owing to the Poincaré inequality and (3.43), we have (3.62), and this bound holds for all p ≥ 1. For the first term of (3.61), we integrate (3.4), yielding (3.63), since 2/p > 1. From (3.59), (3.62), and (3.63), we conclude that the condition in (3.61) can be achieved if the size of the initial data is sufficiently small. Thus, the density satisfies m_f ≤ ρ ≤ M_i + m_i − m_f for all T > 0, as long as the initial data are small enough.
For p ≥ 2, the time integral of the superfluid mass, i.e., of ‖ψ(t)‖²_{L²_x}, cannot be bounded uniformly in [0, T]. This is where the decaying estimates in (3.44) and (3.60) prove to be useful.
3.5.2. The case p = 2. We split the time integral in (3.61) over the ranges 0 ≤ t ≤ 1 (short-time) and t ≥ 1 (long-time). We start with the long-time estimate of the LHS of (3.61) with p = 2. For the first term, we have (3.64); using the Poincaré inequality and (3.44) gives (3.65). This leads us to (3.66), which is the long-time contribution (independent of N) of the constraint in (3.61). It can be made as small as required with an appropriate choice of W_0 + Z_0 + S_0. Finally, we verify the short-time control as well. The superfluid mass bound in (3.4) yields (3.67); similarly, using (3.43), we get (3.68). From (3.59), (3.67), and (3.68), we obtain (3.69), which can be made small enough to satisfy (3.61). This lets us conclude that the density is bounded from below uniformly in time for the case p = 2. Thus, we have the necessary global bound.
3.5.3. The case 2 < p < 4. We begin, once again, with the long-time analysis, i.e., for t ≥ 1. From (3.4), we have (3.70). Using the Poincaré inequality and (3.44), we obtain (3.71); combining these yields (3.72). Once again, the slowest decaying term is the dominant one. Therefore, we arrive at (3.73). The sum converges (uniformly in N) because p < 4. Hence, we obtain good long-time control of the LHS of (3.61) for 2 < p < 4.
What remains is to check that we also maintain short-time control. To this end, we have (3.74) from (3.4), and (3.75) from (3.43), which is the short-time control we are seeking. This implies global solutions, since the density is bounded from below uniformly in time.
3.5.4. The case p ≥ 4. The arguments for short-time control in Section 3.5.3 remain valid even for p ≥ 4. However, the long-time estimates break down. Specifically, the geometric series in (3.73) diverges; its growth is quantified in (3.76) for p = 4 and for p > 4.
Therefore, in this scenario, global-in-time estimates elude us, due to the logarithmic/polynomial dependence on T in (3.76). We can, however, guarantee almost global existence of solutions. Given a set of system parameters, we can ensure that ρ ≥ m_f for any finite time T > 0, as long as we start from small enough initial data (depending on T). In other words, if the size of the data is ε, then we have T ∼ e^{ε^{−4}} for p = 4, and T ∼ ε^{−p/(p−4)} for p > 4. This is the scaling expressed in Theorem 2.4.

Existence of weak solutions (Proof of Theorems 2.3 and 2.4)

Having derived the required a priori estimates, we now establish the existence of a weak solution for a truncated form of the governing equations, and then pass to the limit.

4.1. Constructing the semi-Galerkin scheme. The finite-dimensional wavefunction and velocity are constructed using eigenfunctions of the Laplacian and the Leray-projected Laplacian, respectively.

4.1.1. The approximate wavefunction. Consider the negative Laplacian −∆ on the torus T², with the domain D(−∆) = H². It has a discrete set of non-negative and non-decreasing eigenvalues {β_j}, and the corresponding eigenfunctions {b_j} ⊂ C^∞(T²) can be chosen to be orthonormal in L²_x and orthogonal in H¹_x. We define the approximate wavefunction as in (4.1), for N ∈ ℕ ∪ {0} and coefficients d^N_k(t) ∈ ℂ.
4.1.2. The approximate velocity. We consider the Leray-projected Laplacian (or Stokes operator) A = −P∆, with the domain D(A) = L²_d ∩ H² (see [RRS16, Chapter 2], for instance). The Stokes operator (like the Laplacian) has a discrete set of non-negative and non-decreasing eigenvalues {α_j}, and the corresponding divergence-free, vector-valued eigenfunctions {a_j} ⊂ C^∞(T²) can be chosen to be orthonormal in L²_{d,x} and orthogonal in H¹_x. We define the approximate velocity as in (4.2), for N ∈ ℕ ∪ {0} and c^N_k(t) ∈ ℝ.

4.2. The initial conditions.

4.2.1. The initial wavefunction and initial velocity. We begin by defining P_N (respectively, Q_N) to be the projections onto the space spanned by the first N + 1 eigenfunctions of A (respectively, −∆). Then, we truncate the initial conditions for the velocity and wavefunction accordingly. Before proceeding, it is necessary to establish that the truncated initial conditions converge to the actual ones in the relevant norms.

Lemma 4.1 (The projections Q_N and P_N are convergent). If ψ ∈ H^r_x and u ∈ H^s_{d,x}, then (1) Q_N ψ converges to
ψ, and (2) P_N u converges to u, in the respective norms. The proof utilizes the equivalence of norms between Sobolev spaces and fractional powers of the negative Laplacian/Stokes operator (see Theorem 2.27 in [RRS16]). Given the regularity of ψ_0 and u_0, we deduce the convergence of the approximate initial conditions by applying Lemma 4.1.
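Lemma 4.1 can be visualized numerically: on a periodic grid, the projection Q_N is a Fourier-mode cutoff, and the H¹ error of the truncated datum decreases as N grows. The grid size and test function below are arbitrary choices of ours.

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
psi0 = np.exp(np.cos(X) + 0.5 * np.sin(2.0 * Y))   # smooth test datum

k = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

def Q_N(f, N):
    """Keep the Fourier eigenfunctions of -Laplacian with eigenvalue <= N."""
    fh = np.fft.fft2(f)
    fh[K2 > N] = 0.0
    return np.real(np.fft.ifft2(fh))

def h1_error(f, g):
    dh = np.fft.fft2(f - g) / n**2
    return np.sqrt(np.sum((1.0 + K2) * np.abs(dh) ** 2))

for N in (4, 16, 64, 256):
    print(N, h1_error(psi0, Q_N(psi0, N)))         # error -> 0 as N grows
```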
4.3.1. The continuity equation. Having described the (approximate) initial conditions and the semi-Galerkin scheme, we now establish the existence of solutions to the "approximate" equations, starting with the continuity equation, given by (4.4). Just as in (2.15), a constraint fixes the local existence time T_N. Since the norms in (4.5) are bounded by the size of the initial data, the time T_N is independent of N. Hence, we use T to denote the time of existence, with T arbitrarily large for 1 ≤ p < 4, and T bounded for p ≥ 4 (as specified in Theorem 2.4). We now establish the analogs of Lemmas 2.2 and 2.3 from [Kim87]. These constitute the existence of a unique solution to (4.4), and a Picard iteration scheme for the same, respectively.
Lemma 4.2 establishes the unique solvability of (4.4). Proof. Consider the evolution equation (4.6) for the characteristics of the flow, with x_N(0) = y_N ∈ T². Since u_N ∈ C⁰_{[0,T]} C¹_x, there exists a unique solution x_N(t, y_N) ∈ C¹_{[0,T]} C¹_x. Owing to the incompressibility of the flow u_N, it follows that det(∂x_N/∂y_N) = 1, allowing us to conclude that the characteristics are C¹ diffeomorphisms and, therefore, invertible. This means the inverse characteristic y_N(t, x) is well-defined. We now write the solution to (4.4) along characteristics as (4.7). That (4.7) uniquely solves (4.4) can be verified using the property of the "inverse characteristics" y(t, x); the key identity, valid for any τ ∈ ℝ, follows from Euler's chain rule. □ Now, we consider a convergent sequence of velocities and wavefunctions that belong to the finite-dimensional subspaces spanned by the truncated Galerkin scheme. Given such a convergent sequence, we show that the sequence of density fields satisfying (4.4) is also convergent, and this shall be used to complete a contraction mapping argument below.
Lemma 4.3. Denote by ρ^N_n ∈ C⁰_{[0,T]} C⁰_x the unique solution to the system corresponding to (u^N_n, ψ^N_n), where ρ_N solves (4.4).
Proof. We begin by defining Ψ^N_n := 2λ Re(ψ̄^N_n B_N ψ^N_n). Consider the map y → x^N_n(t, y) and define its inverse y^N_n(t, x); this is just the inverse of the characteristic, i.e., the flow map if the flow were reversed. Due to the flow being incompressible, we know that the matrix ∂y^N_n/∂x is invertible. Also, as shown in the proof of the previous lemma, ∂_t y^N_n = −u^N_n · ∇_x y^N_n. This implies that the derivatives of y^N_n with respect to both space and time are bounded uniformly in n, t, and x. Thus, by the Arzelà-Ascoli theorem, we can extract a subsequence that converges; just as in (4.8), we can show that the solution to (4.9) is the limit of x^N_n. Given the convergence of y^N_n derived above, and because ρ^N_0 ∈ C¹_x, the first term on the RHS vanishes. The second and third terms vanish on account of the following argument: Ψ^N_n has its highest-order term of the form ψ̄^N_n ∆ψ^N_n (a second derivative), and so the assumed convergence of ψ^N_n ensures that the C⁰_x norm is finite, uniformly in n. □

4.3.2. The Navier-Stokes equation. We now consider an "approximate momentum equation", composed of the approximate wavefunction and velocity fields defined by (4.1) and (4.2), respectively; namely, (4.11). Recall that the incompressibility condition is built-in, because the eigenfunction basis used to construct the velocity fields is divergence-free. Now, taking the L² inner product of (4.11) with a_j(x) for 0 ≤ j ≤ N, we arrive at a system of equations for the coefficients describing the time-dependence of the approximate velocity fields. Since we have both lower and upper bounds on the density in the chosen interval of time, we can use Lemma 2.5 in [Kim87] to show that the matrix R_N(t) is invertible. Therefore, we arrive at (4.13), which is the desired evolution equation (written vectorially) for the coefficients c^N_j(t).
4.3.3. The nonlinear Schrödinger equation. As in the previous section, we derive an evolution equation for the coefficients of the approximate wavefunction, by considering an "approximate NLS", namely (4.14). Recall that B_L = B − µ|ψ|^p, i.e., the linear (in ψ) part of the coupling operator. Performing an L² inner product with b_j(x), we get (4.15). Written vectorially, the evolution equation for the coefficients d^N_j(t) becomes (4.16).

4.3.4. Fixed point argument for the coefficients. For a fixed N, a standard contraction mapping argument shows that (4.13) and (4.16) have unique solutions that are continuous in [0, T]. For a pair (u^N_n, ψ^N_n), equivalently (c^N_n, d^N_n), using Lemma 4.2, we can find a solution ρ^N_n. Owing to the smoothness (in space) of the eigenfunctions used in the approximate velocity and wavefunction, performing an iteration on the triplet (c^N_n, d^N_n, ρ^N_n) and using Lemma 4.3, we conclude that the sequence ρ^N_n converges to ρ_N ∈ C⁰_{[0,T]} C⁰_x.

4.4. Compactness arguments. We now extract convergent subsequences from the a priori estimates in Section 3. Beginning with the density, we know that ρ_N is bounded in L^∞_t L^∞_x. Moreover, from (4.4), we obtain (4.18).
The second inequality is due to the (compact) embedding L²_x ⊂ H⁻¹_x on T². All the terms in the last line are finite (uniformly in N) by virtue of the a priori estimates. Therefore, using the Aubin-Lions-Simon lemma, we conclude the strong convergence of a subsequence of the density, as in (4.19). Consider a relabeled subsequence ρ_N that strongly converges to ρ in C([0, T]; H⁻¹_x), so that (4.1) and (4.2) are also appropriately relabeled. For a.e. s, t ∈ [0, T] and any ω ∈ H¹_x, one shows that ⟨ρ_N(t), ω⟩_{H⁻¹×H¹} is uniformly continuous in [0, T], uniformly in N, due to (4.18). Due to the embedding H¹_x ⊂ L^r_x for all 1 ≤ r < ∞, we conclude, using the Arzelà-Ascoli theorem, that ρ_N is relatively compact in C_w([0, T]; L^r_x).

We move on to the velocity. Based on the a priori estimates, we extract a subsequence of u_N that converges weakly. Applying the Lions-Magenes lemma (see [Tem77, Chapter 3]), we deduce that u ∈ C([0, T]; H¹_{d,x}). Based on the L^∞_t L^∞_x bound on the density, and the above strong convergences, it is easy to see that ρ_N u_N and ρ_N u_N ⊗ u_N converge in C([0, T]; L²_x) to ρu and ρu ⊗ u, respectively.

Next, we consider the wavefunction. Again, we extract a subsequence that converges weakly, with ψ ∈ L²_{[0,T]} H^{7/2}_x. From this and (NLS), we have ∂_t ψ ∈ L²_{[0,T]} H^{3/2}_x. Thus, the Lions-Magenes lemma yields ψ ∈ C([0, T]; H^{5/2}_x). Additionally, we also have B_N ψ_N → Bψ in C⁰_t L²_x, due to the regularity of u and ψ. As for the initial conditions, by construction itself (Section 4.2.2), we have ρ^N_0 → ρ_0 in L^r_x for 1 ≤ r < ∞. Also, Lemma 4.1 states that ψ^N_0 and u^N_0 converge to ψ_0 and u_0 in H^{5/2}_x and H¹_{d,x}, respectively. For the momentum, we use the splitting with 1/r + 1/r′ = 1/2; using the embedding H¹_x ⊂ L^{r′}_x to handle the velocity in the first term of the RHS, it is easy to see that the initial momentum converges in the L²_x norm. The approximate solutions (ψ_N, u_N, ρ_N) are smooth enough to satisfy (2.1)-(2.3). The aforementioned compactness results allow us to pass to the limit N → ∞ and arrive at the weak solution (ψ, u, ρ).

4.5. Renormalizing the density. At this point, we know that ρ_N converges weakly-* to ρ in L^∞_t L^∞_x. We wish to use the technique of renormalization to extend this to ρ_N → ρ in C⁰_t L^r_x, for 1 ≤ r < ∞. To achieve this, we adapt a classical argument (see, for instance, Theorem 2.4 in [Lio96b]). We begin by defining a sequence of unit-mass mollifiers ζ_h(x) = h⁻² ζ(x/h), where h will eventually be taken to 0. Next, for a given weak solution ρ ∈ L^∞_t L^∞_x, we mollify (CON) to obtain (4.21), where g_h := g ∗ ζ_h, Ψ := 2λ Re(ψ̄Bψ), and R_h := u · ∇ρ_h − (u · ∇ρ)_h is a commutator. We multiply this by η′(ρ_h), for a C¹ function η : ℝ → ℝ, which yields (4.22). The Sobolev embedding H²_x ⊂ W^{1,r₁}_x for any r₁ ∈ [1, ∞) implies that u ∈ L²_t W^{1,r₁}_x. From Lemma 2.3 in [Lio96b], we note that R_h vanishes in L²_t L^{r₁}_x (and also in L^∞_t L²_x) as h → 0, by choosing r₁ > 2. Similarly, Ψ_h converges to Ψ in C⁰_t L²_x. Finally, note that η′(ρ_h) is uniformly continuous, since ρ (and ρ_h) take values in a compact subset of ℝ. Therefore, using a test function σ, we may pass to the limit h → 0 in (4.22). In other words, if ρ is a weak solution of the continuity equation, then η(ρ) solves (in a weak sense) the renormalized continuity equation

∂_t η(ρ) + u · ∇η(ρ) = η′(ρ)Ψ. (4.23)

Taking the difference of (4.21) for h₁, h₂ > 0, we write the analog of (4.22) for η(ρ_{h₁} − ρ_{h₂}), with η(x) = x^{2n}, where n ∈ ℕ.
Integrating over T² leads to (4.24). Since ψ is regular enough, it follows from the Sobolev embedding and Hölder's inequalities that Ψ = 2λ Re(ψ̄Bψ) ∈ L¹_t L^{r₁}_x for any r₁ ∈ [1, ∞). Between this, the commutator estimate in Lemma 2.3 of [Lio96b], and the boundedness of ρ_0, we find that all of the terms on the RHS of (4.24) vanish as h₁, h₂ → 0, giving us a Cauchy sequence in C([0, T]; L^{2n}_x). Hence, ρ_h converges to ρ in C([0, T]; L^{2n}_x).

We have, so far, proved that our "original approximations" ρ_N of the continuity equation converge in C_w([0, T]; L^r_x) to ρ, and that ρ also belongs to C([0, T]; L^{2n}_x). To achieve what we set out to prove, i.e., that ρ_N converges strongly in C([0, T]; L^r_x) to ρ, it remains to show that the L^r_x norms are continuous in time. It is sufficient to illustrate this for r = 2 (or n = 1), in order to deduce it for the other values of r. Explicitly, if there is a sequence of times t_N → t, then we need ρ_N(t_N) to converge in L²_x to ρ(t). Returning to (4.4), we look at its renormalized version with η(x) = x², and integrate over T² (and then from 0 to t_N). Since we know that ρ ∈ C([0, T]; L²_x), we can do the same calculation with (CON), except over the time interval 0 to t. Subtracting the last two equations, and taking the limit N → ∞, we observe that the first terms on the RHS cancel (recall from Section 4.2.2 that ρ^N_0 converges to ρ_0). Thanks to the uniform boundedness of ψ̄_N B_N ψ_N in L¹_{[0,T]} H^{3/2}_x, we can use the strong convergence in (4.19) to handle the first term on the RHS. The second and third terms follow from simple Hölder inequalities, and the strong convergence of ψ_N and of B_N ψ_N. Finally, the last term is integrable on [0, T], so, as t_N → t, it vanishes. In summary, ρ_N(t_N) converges to ρ(t) in L²_x, which, along with the weak-in-time continuity deduced earlier, implies strong convergence of ρ_N to ρ in C⁰_t L^{2n}_x for all n ∈ ℕ. Interpolating between Lebesgue norms extends this result to C⁰_t L^r_x for all r ∈ [1, ∞).
4.6. The energy equality. The smooth approximations to the weak solutions satisfy an energy equation, (4.26), which is the approximate version of (2.8), for a.e. t ∈ [0, T]. From our choice of the initial conditions and their approximations (see Section 4.2), we can ensure that, as N → ∞, the RHS converges to the initial energy E_0 defined in (3.13); the first term is handled as in (4.27). Moreover, based on the results of Section 4.4, we can conclude that all the terms on the LHS of (4.26) converge strongly to the corresponding terms with the approximate solutions replaced by the weak solution. The first term on the LHS can be dealt with in the same way as the first term on the RHS of (4.27), by simply including a supremum over t outside the absolute values. □ This completes the construction of the solutions. Together with the global/almost-global estimates from Section 3, we can conclude the results of Theorems 2.3 and 2.4.
Electrochemical and galvanic fabrication of a magnetoelectric composite sensor based on InP
A process chain for a magnetoelectric device based on porous InP is presented, using only chemical, electrochemical, photoelectrochemical, and photochemical treatments together with the galvanic deposition of metals into high-aspect-ratio structures. All relevant process steps are presented and discussed, starting with the formation of a self-ordered array of current-line-oriented pores, followed by the membrane fabrication and a post-etching step, as well as the galvanic metal filling of the membrane structures. The resistivity of the porous InP structure could be drastically increased and, with it, the piezoelectric performance of the porous InP structure. The developed galvanic Ni filling process is capable of homogeneously filling high-aspect-ratio membranes.
Background
The aim of this work is to develop small and cheap magnetoelectric sensors that are capable of sensing biomagnetic signals in the picotesla range with high sensitivity. In principle, this can be achieved with multiferroic materials, such as Cr₂O₃ [1], that show magnetoelectric behavior. The drawbacks of these materials are their small effect magnitude and a Curie temperature far below room temperature [1]. Magnetoelectric composites overcome these problems and are very promising candidates for biomagnetic sensing applications. Magnetoelectric composites consist of a piezoelectric and a magnetostrictive component in various geometrical arrangements.
In this paper, a production chain for such a device is presented using only chemical, electrochemical, photoelectrochemical, and photochemical etching of InP and the galvanic deposition of metals. A 1-3 composite geometry is chosen because it allows for very high contact areas between the piezoelectric InP matrix (3-D) and the magnetostrictive wires (1-D). This is another prerequisite for a good magnetoelectric sensor performance. InP, as a III-V compound semiconductor, belongs to the cubic 4̄3m crystal class and, thus, in principle allows for strong piezoelectric behavior. Due to its cubic crystal structure, the only non-vanishing piezoelectric coefficient is the d₁₄ coefficient. Unfortunately, no intrinsic, i.e., insulating, InP can be produced, and thus any piezoelectric voltage is short-circuited by the free charges inherent in the material. This problem has been overcome by etching an almost hexagonally close-packed array of current-line-oriented pores (curro-pores) in <100>-oriented InP wafers, leaving a porous structure with completely overlapping space charge regions (SCR), at least after further chemical etching. This allowed the piezoelectric response to be increased by a factor of 30 in comparison to bulk InP [2,3]. Galvanic filling of the pores with a magnetostrictive metal, such as Ni, allows for the filling of high-aspect-ratio geometries. This enables the use of a several-hundred-micron-thick piezoelectric layer, resulting in a higher magnetoelectric voltage coefficient. Galvanic filling of a membrane structure is much simpler than galvanic pore filling and is a well-established technique [4][5][6]. From basic principles, it is clear that it is not possible to etch curro-pores through the complete InP wafer. Thus, the bulk back-side of the InP pore array is opened photoelectrochemically/photochemically after the curro-pore formation. The individual steps of the sensor device production are discussed, as well as some first results on the piezoelectric and electric properties.
Methods
In this work, single-crystalline (100) InP wafers, doped with S at a concentration of 1.1 × 10¹⁷ cm⁻³ and with a resistivity of 0.019 Ω cm, are used. The InP wafers are double-side polished and epi-ready. Two different wafer thicknesses were used, 400 and 500 μm. The curro-pores were etched in an electrochemical double cell with a 6 wt.% aqueous HCl electrolyte at 20°C [7,8]. To achieve homogeneous pore nucleation, a voltage pulse was applied, followed by a constant anodic potential.
In the second step, the membrane was produced by galvanostatic etching of crystallographically oriented pores (crysto-pores) into the wafer back-side until the previously etched curro-pore array was reached. During formation, this mesoporous layer was simultaneously dissolved photochemically by high-power blue illumination. To avoid underetching and to obtain a good selectivity between the curro-pore array and the bulk back-side, blue-light-assisted photoelectrochemical etching is used. The etching was carried out in the same electrochemical double cell, so that misalignment due to a cell change is avoided, and in the same HCl-containing electrolyte at 20°C. The high-power blue illumination is provided by an Enfis UNO Tag LED array (ENFIS LIMITED, Swansea, UK) with a mean wavelength of 470 nm. To control the whole membrane etching process, the voltage was monitored and Fast Fourier Transform impedance spectroscopy data were recorded, which will not be discussed in this paper.
The third process step is the post-etching of the InP membrane. This step is necessary because the width of the SCR is much larger during anodic pore formation than at open-circuit conditions [9]. Thus, anodic pore etching alone does not result in a porous InP layer with completely overlapping SCR. Therefore, the InP membrane is post-etched in an HF : HNO₃ : EtOH : HAc-containing electrolyte under a cathodic potential for 48 h at 20°C. The electrolyte was optimized to show isotropic etching behavior, especially along the full length of the high-aspect-ratio pores, and to be self-limiting at the SCR surrounding each pore. A cathodic potential was chosen to artificially shrink the SCR around each pore and to expose more area unguarded by the SCR to the electrolyte for dissolution. As the post-etching electrolyte is self-limiting at the SCR, a strongly improved overlap of the SCR is obtained under open-circuit conditions. The electrolyte is pumped through the complete membrane to ensure homogeneous etching over the whole membrane thickness.
In the fourth step, the post-etched membrane shall be galvanically filled with a magnetostrictive material. To prevent possible leakage currents between the piezoelectric matrix and the magnetostrictive filler, a thin dielectric interlayer consisting of Al₂O₃ has to be deposited by atomic layer deposition (ALD). The galvanic filling process is therefore developed in an anodically oxidized aluminum (AAO) membrane, commercially available from Whatman GmbH (Dassel, Germany), with a nominal pore size of 200 nm, which is similar to the average pore size obtained in the InP membranes presented here. A plating base consisting of 25 nm of Ti and 450 nm of Au was sputtered onto the back-side of the membrane. The AAO membranes are galvanically filled in a single cell. The working electrode is a Pt sheet, on which the back-side of the membrane is mounted with InGa, providing a good electrical contact. A Watts-type Ni electrolyte adjusted to a pH value of 2 with H₂SO₄ has been used. The electrolyte temperature was held constant at 35°C. The galvanic deposition was carried out under galvanostatic conditions at a current density of 17 mA/cm².
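As a plausibility check on the deposition parameters, Faraday's law gives the dense-film-equivalent growth rate at the quoted 17 mA/cm² (assuming 100% current efficiency for Ni²⁺ reduction; inside a porous membrane the wires grow faster by roughly the inverse porosity):

```python
F = 96485.0       # Faraday constant, C/mol
M_NI = 58.69      # molar mass of Ni, g/mol
RHO_NI = 8.91     # density of Ni, g/cm^3
Z = 2             # electrons transferred per Ni2+ ion
j = 17e-3         # applied current density, A/cm^2

rate_cm_per_s = j * M_NI / (Z * F * RHO_NI)
print(f"~{rate_cm_per_s * 1e4 * 3600:.0f} um/h dense-film equivalent")  # ~21 um/h
```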
The etched and galvanically filled nanostructures were investigated using a HELIOS D477 SEM (FEI Co., Hillsboro, OR, USA). The piezoelectric properties of the InP nanostructures were characterized with a double-beam laser interferometer from aixACCT Systems GmbH (Aachen, Germany), and the electric properties by a four-point measurement with an Elypor-01 from ET&TE GmbH (Kiel, Germany).
Results and Discussion
The anodic pore formation is optimized to be self-organized with an almost perfectly hexagonally close-packed structure, as shown in Figure 1a. The pore walls have a mean thickness of around 190 nm. The typical aspect ratio of the pores is between 3,000 and 3,700 after electrochemical pore formation. The pore shape is trapezoidal, as depicted in Figure 1a. As illustrated in Figure 1b, the curro-pores grow straight in the <100> direction. Figure 2a-d shows that the previously etched curro-pore array can be opened by photoelectrochemical/photochemical etching. This membrane fabrication process has several advantages. Firstly, the resulting membrane has a great surface homogeneity, as illustrated in Figure 2a,b. The second advantage is that no underetching near the O-ring occurs, as depicted in Figure 2d. The underetching is prevented by choosing photochemical dissolution of the mesoporous layer with blue light. Blue light is used because it is absorbed very close to the surface and, thus, near-surface dissolution of the mesoporous layer is highly enhanced compared to isotropic dissolution. The third big advantage is that the membrane thickness is freely adjustable from nearly the wafer thickness (up to almost 500 μm) down to less than 100 μm. This allows the aspect ratio of the pore structures to be smaller than 1,000, which is especially important for a homogeneous coating of the pore walls with an Al₂O₃ interlayer by ALD, as discussed later. The free adjustability of the membrane thickness is a direct consequence of the great surface homogeneity and the absence of underetching phenomena. Fourthly, the membrane fabrication process is semi-self-limiting, as the dissolution rate drastically decreases as soon as the curro-pore array is reached. The pore openings at the membrane back-side can be modified from cone-like to straight by adjusting the photocurrent via the applied illumination. A small photocurrent, e.g., results in cone-like pore openings, as depicted in Figure 2b.
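The quoted aspect-ratio window is easy to verify; the pore diameter below is an assumption, taken from the ~200 nm nominal size quoted for the comparable AAO membranes:

```python
pore_diameter_nm = 200.0  # assumed, from the AAO comparison above
for thickness_um in (500.0, 100.0):
    aspect_ratio = thickness_um * 1000.0 / pore_diameter_nm
    print(f"{thickness_um:5.0f} um membrane -> aspect ratio ~ {aspect_ratio:.0f}")
# Thinning the membrane to ~100 um keeps the aspect ratio below 1,000,
# the regime quoted for a homogeneous Al2O3 ALD coating of the pore walls.
```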
The pore structure after 48 h of post-etching is shown in Figure 3a. The measured displacement behaves as expected for piezoelectric, but not for ferroelectric, materials. The d₁₄ component is measured to be around |60| pm/V, which is of the same order of magnitude as sputtered PZT thin films [10] and larger by a factor of 30 compared to bulk InP [2,3]. Fortunately, the maximum magnitude of the piezoelectric effect is in the <100> direction, which is exactly the optimal growth direction of the curro-pores in (100)-oriented InP wafers.
For the magnetoelectric composite device, galvanic filling with a magnetostrictive metal, such as Ni, is needed. The post-etched InP membranes can be (and already have been) coated with a thin Al₂O₃ interlayer by ALD to prevent ohmic contacts between the piezoelectric and magnetostrictive components. The galvanic deposition process for Ni can therefore be developed and optimized in AAO membranes with similar pore dimensions, because they are cheap and easily available commercially. Afterwards, this Ni deposition process is most probably directly applicable to Al₂O₃-coated InP membranes, because the electrolyte/Al₂O₃ interface is identical and the pore dimensions are similar. The ALD process for Al₂O₃ is capable of coating membrane structures with an aspect ratio larger than 1,500. Figure 4a shows the bottom part of an AAO membrane completely filled with Ni. One can see that the Ti/Au plating base adheres very well to the membrane. The Ni nanowires start growing directly from the plating base. The nanowires are solid and do not show any voids on optical inspection. Some of the Ni nanowires are broken or missing due to the sample cleavage. In Figure 4b, the middle part of the filled membrane is shown. The nanowires are still solid, without any voids. They tend to be easily mechanically deformable, as indicated in Figure 4b by the twisted Ni nanowire in the middle of the picture. In order to demonstrate the possibility of filling the membrane completely with Ni, the membrane was overfilled, resulting in an approximately 7-μm-thick closed Ni film on top of the membrane surface. This layer is not caused by single Ni nanowires reaching the top surface faster than others and starting the formation of a solid Ni layer there. Rather, it seems that all nanowires reach the surface at the same time, because no empty pores are visible, as shown in Figure 4c. The Ni nanowires remain solid even at the surface and do not exhibit any voids, as already seen in the other parts of the membrane.
Conclusion
A self-ordered membrane structure in InP with strong piezoelectric behavior has been fabricated using only chemical, electrochemical, photoelectrochemical, and photochemical treatments. By cathodic post-etching of this pore structure, the resistivity could be drastically increased, by a factor of 3,000, allowing for a strong piezoelectric effect 30 times larger than that of bulk InP. A photoelectrochemical/photochemical process for fabricating InP membranes with a freely adjustable thickness and high surface homogeneity is presented. The galvanic Ni filling process presented in this work will be transferred to Al₂O₃-coated InP membranes, and after galvanic Ni filling, the device is expected to show a good magnetoelectric performance.
"Engineering",
"Materials Science",
"Physics"
] |
An effective weighted vector median filter for impulse noise reduction based on minimizing the degree of aggregation
Impulse noise is regarded as an outlier in the local window of an image. To detect noise, many proposed methods are based on aggregated distance, including spatially weighted aggregated distance, n-nearest-neighbour distance, local density, and angle-weighted quaternion aggregated distance. However, these methods ignore the weight of each pixel or have limited adaptability. This study introduces the concept of degree of aggregation and proposes a weighting method to obtain the weight vector of the pixels by minimizing the degree of aggregation. The weight vector obtained gives larger components on the signal pixels than on the noisy pixels. It is then fused with the aggregated distance to form a weighted aggregated distance that can reasonably characterise noise and signal. The weighted aggregated distance, along with an adaptive segmentation method, can effectively detect the noise. To further enhance the effect of noise detection and removal, an adaptive selection strategy is incorporated to reduce the noise density in the local window. Finally, the detected noisy pixels are replaced with the weighted channel-combination optimization values. The experimental results exhibit the validity of the proposed method by showing better performance in terms of both objective criteria and visual effects.
Because of the characteristics of impulse noise, both component-wise methods and vector-based methods need to define a noise-like feature to evaluate each pixel and to conduct the image denoising, such as noise detection and noise removal. In component-wise methods, the sum of scalar absolute differences is usually used as the noise-like feature. In vector-based methods, the sum of vector absolute differences (Euclidean distance, city-block distance, quaternion distance) is usually used as the noise-like feature.
Analysing the above methods, it can be found that the aggregated distance [40] has been widely used as a noise-like feature in impulse noise removal. The aggregated distance of a pixel is computed as the sum of the distances from the pixel to its surrounding pixels. However, the distance to noisy pixels does not carry as much meaning as the distance to signal pixels in the colour space, because impulse noise usually holds no valid information about the original image. Therefore, some improved versions of the aggregated distance have been proposed.
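A minimal sketch of the aggregated distance in a 3×3 colour window (the helper names are ours; the pixel minimizing it is the output of the classic vector median filter):

```python
import numpy as np

def aggregated_distances(window):
    """window: (9, 3) array of RGB vectors from a 3x3 neighbourhood."""
    D = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return D.sum(axis=1)                # AD_i = sum_j ||x_i - x_j||

window = np.random.randint(0, 256, size=(9, 3)).astype(float)
ad = aggregated_distances(window)
vmf_output = window[np.argmin(ad)]      # the vector median of the window
```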
The rank-ordered absolute differences (ROAD) [41] and the fast averaging peer group filter (FAPGF) [32] have used the local density of local pixel groups in the colour space as a noise-like feature. The ROAD value of a pixel is computed as the sum of the Euclidean distances between the pixel and the four closest (in the colour space) pixels in the local window. Meanwhile, FAPGF first defines a threshold distance in the colour space and then counts the number of other pixels in the window that fall within this threshold range. This number is used as the independent variable of a function that is regarded as a noise-like feature. Although the local density information can reflect the characteristics of a pixel (the higher the local density, the more signal-like; the lower the local density, the more noise-like) and does reduce the bad influence of noisy pixels, such information greatly depends on parameters and therefore cannot easily adapt to different noise levels.
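For comparison, a sketch of the ROAD feature as described above (the choice m = 4 follows [41]; the vectorised Euclidean distances are our implementation choice):

```python
import numpy as np

def road(window, m=4):
    """ROAD of the centre pixel of a 3x3 window given as a (9, 3) array:
    the sum of the m smallest colour distances to the 8 neighbours."""
    centre = window[4]
    neighbours = np.delete(window, 4, axis=0)
    d = np.linalg.norm(neighbours - centre, axis=1)
    return np.sort(d)[:m].sum()    # small for signal, large for impulses
```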
The adaptive vector median filter (AVMF) [31] uses the Euclidean distance between the mean vector of several sample vectors that are close to the median vector and the current colour vector to compute its noise-like feature (a reduced aggregated distance), and replaces the noisy pixels whose noise-like feature exceeds a fixed value with the output of the standard VMF. Meanwhile, the fast peer group filter for VMF (FPGFVMF) [27] regards the current centre pixel in the local window as a signal pixel only if the number of surrounding pixels that are close enough to it (in the colour space) is sufficiently large. These two types of noise-like features and switching mechanisms use fixed thresholds and parameters, and they too depend heavily on parameters.
The above improved versions of the aggregated distance can be regarded as kinds of weighted aggregated distance, but they depend heavily on parameters. To weight the pixels and detect the noise reasonably, the concept of a degree of aggregation is proposed, based on which a weight vector is calculated by minimizing the degree of aggregation. The weight vector has larger components on the signal pixels than on the noisy pixels. These weights are then used to form a weighted aggregated distance, which is used as the noise-like feature in this study. An adaptive data segmentation method for the weighted aggregated distance sequence is then designed to adaptively detect the noisy pixels. In the image recovery phase, we use values optimised by channel combination to replace the noisy pixels. Experimental results show that the proposed method exhibits competitive results against other state-of-the-art colour image denoising methods.
The rest of this study is organised as follows. Section 2 discusses the noise analysis and detection method. Section 3 describes the noise removal method. Section 4 presents the experimental results. Finally, the conclusion is drawn in Section 5.
Degree of aggregation and weighted aggregated distance
In the local window (for convenience of description, the window size is set to 3×3 hereinafter), the pixel group in the window is mapped to nodes in the colour space. These nodes are connected by undirected edges, where the value of each edge is the Euclidean distance between the two nodes associated with the edge. In this way, the nodes and edges form a complete graph. The adjacency matrix D of the graph is built accordingly, where D_{i,j} represents the colour distance between the i-th pixel and the j-th pixel. The adjacency matrix D is a real symmetric matrix with non-negative elements. The aggregated distance is defined as the sum of the distances of a pixel from its neighbouring pixels; in the matrix D, the aggregated distance of the i-th pixel is computed as the sum of the elements of the i-th row or column.

However, the distance (in the colour space) of a pixel from the noisy pixels in the window is not as important as its distance from the signal pixels. We would like the aggregated distance to include only the distances to the signal pixels and exclude the distances to the noisy pixels, but we do not know in advance which pixels are noisy and which are signal. To reduce the influence of noisy pixels as much as possible, we propose the concept of the degree of aggregation. First, define the weight vector of the nodes, W^T = (w_1, ..., w_n), with the constraint that its components sum to one; the initial value of each component is 1/n. The degree of aggregation is defined as X = W^T D W (Equation 3).

The rightmost part of Equation (3) contains two terms: the first is the weighted sum of the off-diagonal elements of the adjacency matrix D, and the second is the weighted sum of the diagonal elements. If each node is weighted appropriately, X should be small. If we take the weight vector W as a variable and minimise X, an optimal weight vector will be obtained. However, since the diagonal elements of the adjacency matrix D are 0, only the first term is present while the second term vanishes. If W is taken as a variable and X is minimised directly, the weight flows freely to arbitrary components of the W vector; X then attains the theoretical minimum value of 0, while W is non-unique and meaningless. But if the diagonal of the matrix D is reasonably modified so that it becomes a positive definite matrix, then a unique and optimal W is easily obtained. The adjacency matrix can be reasonably modified in two ways:

1. Modify each diagonal element to be slightly larger than the sum of the elements in the same row or column.
2. Modify the diagonal element to be slightly larger than the negative of the minimum eigenvalue M v of D.
Δ is a very small positive value. The diagonal elements of the matrix become positive, and the modified positive-definite adjacency matrix D_1 is obtained as

D_1 = D + diag(d_1, ..., d_n),   with d_i given by Equation (4) or (5).

The degree of aggregation becomes

X = W^T D_1 W = \sum_{i \ne j} w_i w_j D_{i,j} + \sum_{i} w_i^2 D_{1,(i,i)}.   (7)

In the right-hand part of Equation (7), when the degree of aggregation is minimised, the first term enlarges the gap between the weight components, while the second term shrinks the weight components. In other words, the penalty factor (the diagonal elements of D_1) also controls the sensitivity of the weight vector. If there is no noise in the local window, its degree of aggregation is small; the presence of noise increases the local window's degree of aggregation. When X is minimised, the resulting optimal weight vector places more weight on the signal pixels than on the noisy pixels. Finally, the weight vector is merged into the aggregated distances to give the weighted aggregated distances

G^w_i = \sum_{j} w_j D_{i,j}.   (8)

The following example illustrates the weighting method. Consider the nine pixels of a 3*3 window. First suppose that all nine pixels are signal pixels. The initial weight vector is W = (1/9, ..., 1/9)^T. According to Equation (8), the initial weighted aggregated distances are (0.89, 1.12, 1.12, 1.12, 1.34, 1.12, 1.34, 1.28, 1.28). According to Equation (7), the degrees of aggregation under the two modifications (Equations 4 and 5) are 2.36 and 1.645, respectively. Minimising the degree of aggregation yields the optimal weight vector, with new minima of 2.28 and 1.58; the newly minimised degree of aggregation is only slightly smaller than the original one, and the optimal weighted aggregated distances change correspondingly little.

Now suppose instead that the first six pixels are signal pixels while the remaining three are noisy pixels. With the initial weight vector W = (1/9, ..., 1/9)^T, the degrees of aggregation according to Equation (7) are 119.851 and 95.142. After minimising the degree of aggregation, the optimal weight vector yields new minima of 54.870 and 54.664, both far smaller than the initial degrees of aggregation. The resulting optimal weighted aggregated distances are much more distinct between noise and signal than the unweighted aggregated distances, which is what makes them useful as a noise-like feature.

It is worth noting that the classic VMF can be regarded as a reduced version of our method. Consider the first setting of the diagonal elements (set to the aggregated distance): if the off-diagonal elements of D_1 are ignored (set to 0), only the diagonal elements D_{i,i} (the unweighted aggregated distance G_i of node i; refer to Equation 4) remain, giving the diagonal matrix D_2 = diag(G_1, ..., G_n). Performing the same constrained minimum-aggregation optimisation for D_2, as a reduced version of the full optimisation, and applying the Lagrange multiplier method, the optimal weight vector W_1 is obtained as

W_{1,i} = (1/G_i) / \sum_{j} (1/G_j).   (10)

According to Equation (10), the pixel with the minimum unweighted aggregated distance, i.e. the pixel selected by the VMF, has the largest component of the weight vector W_1. In this sense the VMF considers only the aggregated distances (the diagonal elements) and ignores the off-diagonal elements of the adjacency matrix.
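The following Python sketch illustrates the weighting procedure of this section for one 3*3 window, assuming numpy and scipy are available; function names are illustrative, and SLSQP stands in for the Goldfarb-Idnani dual method used later in the paper.

import numpy as np
from scipy.optimize import minimize

def adjacency_matrix(window):
    # window: (9, 3) array of RGB vectors from a 3x3 neighbourhood.
    diff = window[:, None, :] - window[None, :, :]
    return np.linalg.norm(diff, axis=2)          # D[i, j] = colour distance

def modify_diagonal(D, delta=1e-3, mode=1):
    D1 = D.copy()
    if mode == 1:                                # proposed1, Eq. (4)
        np.fill_diagonal(D1, D.sum(axis=1) + delta)
    else:                                        # proposed2, Eq. (5)
        mv = np.linalg.eigvalsh(D).min()
        np.fill_diagonal(D1, -mv + delta)
    return D1

def optimal_weights(D1):
    n = D1.shape[0]
    x0 = np.full(n, 1.0 / n)                     # initial weight vector 1/n
    res = minimize(lambda w: w @ D1 @ w, x0,     # degree of aggregation, Eq. (7)
                   jac=lambda w: 2 * D1 @ w,
                   method='SLSQP',
                   bounds=[(0.0, 1.0)] * n,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0})
    return res.x

window = np.random.randint(0, 256, size=(9, 3)).astype(float)
D = adjacency_matrix(window)
W = optimal_weights(modify_diagonal(D, mode=1))
G_w = D @ W                                      # weighted aggregated distances, Eq. (8)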
Noise detection
Noise detection is based on adaptive segmentation of the noise-like feature. When pixels contaminated by impulse noise are present in a window, the complete graph becomes confused. The aggregated distance of the noisy nodes increases greatly, and that of the remaining signal pixels also increases, but by a much smaller amplitude (as shown in the example in Section 2.1). A segmentation method that adaptively divides the aggregated distance sequence into a signal part and a noise part can solve this problem. The weighted aggregated distance introduced in Section 2.1 is used as the data sequence to be segmented in order to find the noisy pixels. The segmentation of this sequence is not straightforward, however: the front portion of the sequence, representing the signals' noise-like feature values, is usually small and convergent, while the following portion, representing the noise, is usually large and divergent. In other words, the weighted aggregated distances of the signal nodes naturally cluster, while those of the noisy nodes naturally diverge (a characteristic of noise). To address this problem, a segmentation method based on the sigmoid function is effective. The sigmoid function is

f(x) = 1 / (1 + e^{-x}).   (11)

The curve of the sigmoid function is shown in Figure 1, where the red squares on the curve represent the noise, the green squares represent the signals, and the blue square represents the segmentation point P.
We want the signal part of the weighted aggregated distance sequence, which is small and convergent, to fit the high-slope area of the sigmoid (near the centre point, on the negative half-axis), and the remaining noise part of the sequence, which is larger and divergent, to fit the gentle-slope area (far from the centre point, on the positive half-axis). The point P is taken as the segmentation point separating noise from signal.
Because the standard sigmoid function has most of its range of slopes over the domain [0, 4], which is sufficient for the constrained optimisation of the model, a scale transformation is performed on the weighted aggregated distance sequence. The adaptive segmentation algorithm consists of the following four steps:

(1) Sort the weighted aggregated distances in ascending order.
(2) Scale the weighted aggregated distances into the interval [0, 4].
(3) Map the scaled weighted aggregated distances to the sigmoid function values f(x) of Equation (11).
(4) Obtain the optimal segmentation point P by applying the maximum inter-class variance method to the mapped values.
In addition, a double-ended threshold strategy supplements the adaptive segmentation method: if the weighted aggregated distance of a pixel is no more than a threshold T_1, the pixel is classified as a signal pixel; if it exceeds a threshold T_2, the pixel is classified as a noisy pixel.
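A minimal sketch of this detection step, assuming numpy; the Otsu-style split and the handling of a single undecided pixel are our implementation choices.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # Eq. (11)

def otsu_split(values):
    # Maximum inter-class variance over candidate split points of a sorted sequence.
    order = np.argsort(values)
    v = values[order]
    best_k, best_var = 1, -np.inf
    for k in range(1, len(v)):
        w0, w1 = k / len(v), 1 - k / len(v)
        var = w0 * w1 * (v[:k].mean() - v[k:].mean()) ** 2
        if var > best_var:
            best_var, best_k = var, k
    return order, best_k                          # first best_k sorted pixels = signal

def detect_noise(G_w, T1=25.0, T2=280.0):
    labels = np.full(len(G_w), -1)                # -1 = undecided
    labels[G_w <= T1] = 0                         # certainly signal
    labels[G_w >= T2] = 1                         # certainly noise
    und = np.where(labels == -1)[0]
    if len(und) > 1:
        g = G_w[und]
        scaled = 4.0 * (g - g.min()) / (g.max() - g.min() + 1e-12)  # map to [0, 4]
        order, k = otsu_split(sigmoid(scaled))
        labels[und[order[:k]]] = 0
        labels[und[order[k:]]] = 1
    elif len(und) == 1:
        labels[und] = 0                           # a lone undecided pixel is kept as signal
    return labels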
Adaptive selection strategy
In the image recovery phase, the recovery strategy strongly influences the result. A key question is whether already-recovered values should be reused. If the original values in the restored part of the window are kept, the image cannot easily be recovered from heavy noise. Conversely, if the recovered values are used in place of the original values, the details and edges of the image may be misinterpreted as noise because of the uneven noise distribution. To balance these concerns, [42] proposed a recovery strategy named coarse-to-fine detection: obvious noise is detected and recovered in the coarse stage, reducing the noise density, and the less obvious noise left behind is detected and recovered in the fine stage. The problem lies in the assumption that coarse detection is accurate; the quality of recovery cannot easily be guaranteed when the noise level is high, and using a less obvious noisy pixel as a replacement will harm the fine detection. Consider a processing window in which V_s denotes an original value, V_d a recovered value, and V_c the centre point being processed. Within the processing window, a recovered value usually has better properties: the probability that a recovered pixel is still noisy is low. In a sense, using recovered values in the local window artificially reduces the noise density, which is extremely important under heavy noise. Consequently, we propose the following adaptive selection mechanism, which adaptively replaces some original values with recovered values,
where S is the Euclidean distance between the original value and the recovered value at the same image position, and H is a positive threshold.
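Because Equation (12) itself is garbled in our source, the linear ramp min(S/H, 1) in the sketch below is an assumption, chosen to be consistent with the parameter-selection section (replacement with probability 1 when S exceeds H).

import numpy as np

def select_value(original, recovered, H=100.0, rng=np.random.default_rng()):
    # S: Euclidean distance between original and recovered colour vectors.
    S = np.linalg.norm(np.asarray(original, float) - np.asarray(recovered, float))
    p = min(S / H, 1.0)                  # assumed form of the replacement probability
    return recovered if rng.random() < p else original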
Replacement with value of channel combination optimization
Most methods choose an "optimal" pixel as the output of the filter, but this solution has two drawbacks:

1. The pixel may not have a neighbour that is exactly equal to it in the original uncorrupted image.
2. The values on some channels of a pixel may be good, while those on the other channels may differ considerably from the correct values.
In sum, the channels of a pixel cannot easily be evaluated as a whole. To combine the appropriate channels of different pixels, we propose the following combinatorial optimisation strategy:

(r*, g*, b*) = argmin_{r, g, b} G^w(P_t),   with P_t = (R_r, G_g, B_b),

where r, g and b are the serial numbers of the window pixels supplying the R, G and B channels, respectively, and r, g, b ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8}. The combined scheme (r*, g*, b*) is regarded as the optimal combination scheme, and the colour vector P*_t, which has the smallest weighted aggregated distance, is taken as the ideal filter output.
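A brute-force sketch of this search (9^3 = 729 candidates per window), assuming numpy; our reading is that the candidate with the smallest weighted aggregated distance to the window wins.

import numpy as np
from itertools import product

def channel_combination(window, W):
    # window: (9, 3) RGB values; W: optimal weight vector from Section 2.1.
    best, best_score = None, np.inf
    for r, g, b in product(range(9), repeat=3):
        cand = np.array([window[r, 0], window[g, 1], window[b, 2]])
        score = np.dot(W, np.linalg.norm(window - cand, axis=1))  # G^w of candidate
        if score < best_score:
            best_score, best = score, cand
    return best                            # P*_t, the filter output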
Noise model
To evaluate the restoration performance of different methods, an artificial impulse noise model capable of simulating natural impulse noise is needed. A widely used noise model is

X_{i,j} = N_{i,j} with probability r,   X_{i,j} = O_{i,j} with probability 1 - r,

where r represents the noise probability, O_{i,j} the uncontaminated pixel value at position (i, j) of the image, and N_{i,j} the noise value at (i, j). The simulation of the noise value proceeds in two steps. First, the three colour channels independently suffer corrosion by the noise value with probability r; then, for each contaminated pixel, a correlation factor ρ (e.g. ρ = 0.5) simulates channel correlation. That is, if a pixel is contaminated on at least one channel, each of its remaining uncontaminated channels suffers impulse corrosion with probability 0.5. If the correlation factor ρ = 0 is taken, the corrosion of the three colour channels is unrelated and each channel independently suffers impulse corrosion with probability r. We use four widely used objective criteria to evaluate restoration quality, namely PSNR (peak signal-to-noise ratio), SSIM (structural similarity) [43], NCD (normalised colour distance) and FSIMc (feature similarity) [44]. In their standard forms, the first three criteria are

PSNR = 10 log10( 255^2 / MSE ),  with MSE the mean squared error over all pixels and channels,
SSIM(x, y) = ( (2 μ_x μ_y + c_1)(2 σ_{xy} + c_2) ) / ( (μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2) ),
NCD = Σ ΔE(i, j) / Σ E*(i, j),  computed in a perceptually uniform colour space.
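A sketch of the correlated impulse-noise simulation, assuming numpy; drawing the impulse values uniformly from [0, 255] is our assumption, as the source does not specify the distribution of N_{i,j}.

import numpy as np

def add_impulse_noise(img, r=0.3, rho=0.5, rng=np.random.default_rng()):
    noisy = img.copy()
    h, w, _ = img.shape
    hit = rng.random((h, w, 3)) < r                    # independent per-channel hits
    corrupted = hit.any(axis=2)                        # pixels hit on >= 1 channel
    spread = rng.random((h, w, 3)) < rho               # channel-correlation corrosion
    hit |= spread & corrupted[..., None]
    noisy[hit] = rng.integers(0, 256, size=hit.sum())  # assumed uniform impulse values
    return noisy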
Parameter selection
The parameters T_1 and T_2 used for impulse noise detection represent two deterministic judgments based on the numerical amplitude of the weighted aggregated distance G^w. When G^w is less than T_1, the pixel can safely be regarded as a signal pixel; when G^w is greater than T_2, the pixel can be regarded as a noisy pixel. T_1 is usually small and T_2 usually large; in this study, T_1 and T_2 are set to 25 and 280. If G^w lies between T_1 and T_2, the noise is detected by the adaptive detection mechanism above. As for the difference S between the original value and the recovered value at the same image position, the parameter H, which plays an adjustment and threshold role in the adaptive selection strategy, controls the sensitivity of the difference S in the selection probability (Equation 12) and reduces the noise density of local image blocks. The larger S is, the more unreliable the original pixel at that position. When S is greater than H, the original pixel at that position is extremely unreliable and is replaced by the restored pixel with probability 1. In our extensive experiments, we found that the denoising performance is not sensitive to the setting of the parameter H; for convenience, H is set to 100.
Performance comparison
The proposed algorithm is compared with other effective image restoration algorithms, namely AWQD [38] and L0TV [45], on natural images, partly shown in Figure 2, taken from the CSIQ database (http://vision.okstate.edu/csiq). Random impulse noise with a density ranging from 0.1 to 0.5 is added to each image, and the recovery quality is compared using four objective criteria (PSNR, SSIM, NCD, FSIMc). Tables 1-4 show the performance of the proposed method and the other methods on the Shroom image for the different objective criteria and noise levels (0.1 to 0.5). They show that, on the PSNR and NCD criteria, the proposed method outperforms the other algorithms at every noise level. When the noise density is as high as 0.5, L0TV shows a higher score in terms of SSIM and FSIMc, but it performs poorly in texture protection compared with the proposed method and produces fake colour blocks. To reflect the robustness and universality of the proposed algorithm, Tables 5-8 present the comprehensive performance of the aforementioned algorithms on the four objective criteria over the test images with r = 0.3, and Figure 3 visually compares the methods at r = 0.3. The proposed1 method shows better detail protection at low noise levels, while the proposed2 method shows higher robustness at high noise levels.

AWQD demonstrates good recovery performance at the 0.1 noise level and preserves both the colour and the image structure well; however, its recovery ability is greatly reduced as the noise increases, because the detection threshold set by this algorithm is difficult to control and its coarse-to-fine detection and recovery struggle to achieve a good denoising effect. In the coarse detection stage under high-density noise, the image has a high noise density and the recovery value of the coarse stage is very likely to be a less obvious noise value. As the original noisy pixels remaining for the fine stage become less obvious, the original signal pixels correspondingly become more anomalous, leading to errors in fine detection and producing many structures that do not exist in the original image. L0TV is less sensitive to the noise density because it is based on data-fidelity regularization and is robust at high noise levels. However, at low or medium noise densities the image is excessively smoothed, as L0TV lacks a sensitive detail-protection mechanism, as seen in Figure 4 (the left image shows the L0TV result, the right image the proposed method's result). Furthermore, when the local noise similarity is high, L0TV produces many fake colour blobs, as shown in Figure 5.

Table 9 lists the running times of the different methods for denoising the test images corrupted by impulse noise with r = 0.2. The test is performed on a desktop computer running 64-bit Windows 7 with a 3.2 GHz Intel Core(TM) i7-4790 CPU and 16 GB of memory. The main computational cost of the proposed algorithm comes from finding the optimal solution of the constrained quadratic form of Equation (7) (in this study, we use the Goldfarb-Idnani active-set dual method (http://www.javaquant.net/papers/goldfarbidnani.pdf) to solve it). Although the running time of the proposed algorithm is longer than that of the other methods except AWQD, it is still competitive when the denoising performance is taken into account.
CONCLUSION
In this paper, the degree of aggregation is proposed, based on which a weighting method obtains an optimal weight vector by minimising the degree of aggregation. This weight vector, which has larger components on the signal pixels than on the noisy pixels, is integrated with the aggregated distance to form a weighted aggregated distance. Subsequently, an adaptive segmentation method based on the sigmoid function is proposed to classify each pixel as a signal pixel or a noisy pixel. To reduce the local-window noise density and avoid edge distortion, an adaptive selection strategy is also incorporated. Finally, the detected noisy pixels are replaced by the values produced by channel-combination optimisation. Experimental results show that, compared with other methods, the proposed algorithm effectively protects image details while ensuring the quality of noise removal. One limitation of the algorithm is its high computational complexity; it could be accelerated by a faster method for constrained quadratic minimisation, which we leave as future work.
A.1 Positive definite matrix proof of proposed1
Define the row sums S_i and the modified matrix A:

S_i = \sum_{j \ne i} D_{i,j},   (A.1)

A_{i,i} = S_i + \Delta,  A_{i,j} = D_{i,j} for i \ne j.   (A.2)

According to Equation (A.2), expanding the quadratic form of A and pairing the terms in D_{i,j} and D_{j,i} gives

W^T A W = \sum_{i < j} D_{i,j} (w_i + w_j)^2 + \Delta \sum_{i} w_i^2.

The quadratic form of matrix A is composed entirely of square terms with nonnegative coefficients, and Δ is greater than 0; hence, when W is not equal to the zero vector, W^T A W must be positive. In conclusion, A is a positive definite matrix.
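A quick numerical check of both modifications (our addition, assuming numpy):

import numpy as np

D = np.random.rand(9, 9) * 100
D = np.triu(D, 1); D = D + D.T                             # symmetric, zero diagonal
delta = 1e-3

A = D.copy(); np.fill_diagonal(A, D.sum(axis=1) + delta)   # proposed1
B = D + (delta - np.linalg.eigvalsh(D).min()) * np.eye(9)  # proposed2

assert np.linalg.eigvalsh(A).min() > 0                     # A is positive definite
assert np.linalg.eigvalsh(B).min() > 0                     # B is positive definite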
A.2 Positive definite matrix proof of proposed2
Since D is a real symmetric matrix, we have

D = P^T Λ P,   (A.5)

where P is an orthogonal matrix and Λ is a diagonal matrix whose diagonal elements are the eigenvalues of matrix D.
The modified matrix B can be written as

B = D + (Δ − M_v) I = P^T (Λ + (Δ − M_v) I) P,

where I is the identity matrix and M_v is the minimum eigenvalue of matrix D. Each eigenvalue of B equals the corresponding eigenvalue of D plus Δ − M_v, and because Δ > 0, all eigenvalues of matrix B are greater than 0, so matrix B is a positive definite matrix. | 5,901 | 2020-12-03T00:00:00.000 | [
"Computer Science"
] |
Unidirectional III-V microdisk lasers heterogeneously integrated on SOI
We demonstrate unidirectional bistability in microdisk lasers electrically pumped and heterogeneously integrated on SOI. The lasers operate in continuous wave regime at room temperature and are single mode. Integrating a passive distributed Bragg reflector (DBR) on the waveguide to which the microdisk is coupled feeds laser emission back into the laser cavity. This introduces an extra unidirectional gain and results in unidirectional emission of the laser, as demonstrated in simulations as well as in experiment. © 2013 Optical Society of America

OCIS codes: (140.3460) Lasers; (130.0130) Integrated optics; (130.3120) Integrated optics devices; (230.0230) Optical devices; (230.1150) All-optical devices.

References and links
1. M. Sorel, P. J. R. Laybourn, G. Giuliani, and S. Donati, "Unidirectional bistability in semiconductor waveguide ring lasers," Appl. Phys. Lett. 80(17), 3051–3053 (2002).
2. M. T. Hill, H. J. S. Dorren, T. De Vries, X. J. M. Leijtens, J. H. Den Besten, B. Smalbrugge, Y. S. Oei, H. Binsma, G. D. Khoe, and M. K. Smit, "A fast low-power optical memory based on coupled micro-ring lasers," Nature 432(7014), 206–209 (2004).
3. L. Liu, R. Kumar, K. Huybrechts, T. Spuesens, G. Roelkens, E.-J. Geluk, T. de Vries, P. Regreny, D. Van Thourhout, R. Baets, and G. Morthier, "An ultra-small, low-power, all-optical flip-flop memory on a silicon chip," Nat. Photonics 4(3), 182–187 (2010).
4. D. Liang, M. Fiorentino, T. Okumura, H.-H. Chang, D. T. Spencer, Y.-H. Kuo, A. W. Fang, D. Dai, R. G. Beausoleil, and J. E. Bowers, "Electrically-pumped compact hybrid silicon microring lasers for optical interconnects," Opt. Express 17(22), 20355–20364 (2009).
5. A. W. Fang, R. Jones, H. Park, O. Cohen, O. Raday, M. J. Paniccia, and J. E. Bowers, "Integrated AlGaInAs-silicon evanescent race track laser and photodetector," Opt. Express 15(5), 2315–2322 (2007).
6. J. P. Hohimer, G. A. Vawter, and D. C. Craft, "Unidirectional operation in a semiconductor ring diode laser," Appl. Phys. Lett. 62(11), 1185–1187 (1993).
7. A. W. Fang, R. Jones, H. Park, O. Cohen, O. Raday, M. J. Paniccia, and J. E. Bowers, "Integrated AlGaInAs-silicon evanescent race track laser and photodetector," Opt. Express 15(5), 2315–2322 (2007).
8. C. J. Born, S. Yu, M. Sorel, and P. J. R. Laybourn, "Controllable and stable mode selection in a semiconductor ring laser by injection locking," in Proceedings of the Conference on Lasers and Electro-Optics CLEO '03 (Baltimore, Maryland, 2003), pp. 1–3, paper CWK4.
9. A. F. J. Levi, R. E. Slusher, S. L. McCall, J. L. Glass, S. J. Pearton, and R. A. Logan, "Directional light coupling from microdisk lasers," Appl. Phys. Lett. 62(6), 561–564 (1993).
10. J. U. Nöckel, A. D. Stone, and R. K. Chang, "Q spoiling and directionality in deformed ring cavities," Opt. Lett. 19(21), 1693–1695 (1994).
11. Q. J. Wang, C. Yan, N. Yu, J. Unterhinninghofen, J. Wiersig, C. Pflügl, L. Diehl, T. Edamura, M. Yamanishi, H. Kan, and F. Capasso, "Whispering-gallery mode resonators for highly unidirectional laser action," Proc. Natl. Acad. Sci. U.S.A. 107(52), 22407–22412 (2010).
12. D. Liang, S. Srinivasan, D. A. Fattal, M. Fiorentino, Z. Huang, D. T. Spencer, J. E. Bowers, and R. G. Beausoleil, "Teardrop reflector-assisted unidirectional hybrid silicon microring lasers," IEEE Photon. Technol. Lett. 24(22), 1988–1990 (2012).
13. F. Van Laere, G. Roelkens, M. Ayre, J. Schrauwen, D. Taillaert, D. Van Thourhout, T. F. Krauss, and R. Baets, "Compact and highly efficient grating couplers between optical fiber and nanophotonic waveguides," J. Lightwave Technol. 25(1), 151–156 (2007).
14. M. Sorel, G. Giuliani, A. Scirè, R. Miglierina, S. Donati, and P. J. R. Laybourn, "Operating regimes of GaAs-AlGaAs semiconductor ring lasers: experiment and model," IEEE J. Quantum Electron. 39(10), 1187–1195 (2003).
15. T. Numai, "Analysis of signal voltage in a semiconductor ring laser gyro," IEEE J. Quantum Electron. 36(10), 1161–1167 (2000).
16. E. J. D'Angelo, E. Izaguirre, G. B. Mindlin, G. Huyet, L. Gil, and J. R. Tredicce, "Spatiotemporal dynamics of lasers in the presence of an imperfect O(2) symmetry," Phys. Rev. Lett. 68(25), 3702–3705 (1992).
17. M. Sargent III, "Theory of a multimode quasiequilibrium semiconductor laser," Phys. Rev. A 48(1), 717–726 (1993).
18. J. Van Campenhout, "Thin-film microlasers for the integration of electronic and photonic integrated circuits," PhD thesis, Ghent University (2007).
19. M. Sorel, P. J. R. Laybourn, A. Scirè, S. Balle, G. Giuliani, R. Miglierina, and S. Donati, "Alternate oscillations in semiconductor ring lasers," Opt. Lett. 27(22), 1992–1994 (2002).
20. T. Spuesens, F. Mandorlo, P. Rojo-Romeo, P. Regreny, N. Olivier, J.-M. Fedeli, and D. Van Thourhout, "Compact integration of optical sources and detectors on SOI for optical interconnects fabricated in a 200mm CMOS pilot line," J. Lightwave Technol. 30(11), 1764–1770 (2012).
Introduction
Directional bistability, i.e. the ability of a laser to operate either in the clockwise (CW) or counter-clockwise (CCW) mode, is a unique characteristic of ring and disk lasers [1]. The bistability of microring lasers has been used to demonstrate optical switching and logic applications [2,3]. This feature has also been observed in racetrack ring lasers and microring lasers built on a hybrid silicon platform [4,5]. Bistability is useful for some applications, but is undesirable for microdisk lasers used in optical interconnects: different devices belonging to the same design can lase in different directions or switch from one lasing direction to the other depending on the injection current and the temperature. Stable unidirectional lasing ensures a higher efficiency of the laser and has been demonstrated using several approaches. "S-shape" ring resonator cavities are designed to introduce asymmetric coupling between the CW and CCW modes [6]. To increase the net modal gain in one direction, the injection of an optical pulse from an external laser or light-emitting diode (LED) has also been used [1,7,8]. These approaches require an external light source or introduce additional optical loss, which either degrades the laser performance or increases the complexity of the total system and its power consumption. Another approach is to break the rotational symmetry by using deformed optical microcavities to increase the directionality of emission and the power collection efficiency [9]. However, all deformed cavities have the problem that the quality factor (Q factor) decreases significantly as the deformation increases [10]. Highly unidirectional laser action from whispering-gallery modes has been demonstrated with an elliptical quantum cascade laser microcavity with a wavelength-size notch at the boundary [11].
Another approach is to rely on an integrated optical reflector, such as a teardrop reflector, at one end of the bus waveguide. The reflector induces the laser to emit light toward the other end. Compared with external injection from another laser, this approach does not require additional power consumption or additional complexity and is free of wavelength mismatch between two lasers. Unidirectional operation of a hybrid silicon microring laser coupled to a waveguide with such a teardrop reflector is investigated in [12]. However, the hybrid microring laser in that work had a diameter of 50µm and was operated at a 30mA bias current. Under these experimental conditions, unidirectionality was qualitatively demonstrated, although the laser did not necessarily operate single mode. In this paper we demonstrate unidirectional lasing of single-mode hybrid silicon microdisk lasers operating in continuous wave at room temperature. We provide quantitative experimental results as well as a numerical analysis. The devices have a diameter of 7.5µm and are electrically pumped. Extra power consumption and added design complexity are not necessary to achieve unidirectionality, as the system relies on a distributed Bragg reflector integrated in the silicon-on-insulator (SOI) circuit.
Device design and integration technology
We fabricated the unidirectional microdisk laser shown schematically in Fig. 1. A microdisk is etched into an InP-based film that is bonded onto a patterned SOI waveguide structure. The InP etch is not complete so that an electrical bottom contact can be defined. The disk edge is laterally aligned to an underlying SOI wire waveguide (WG). The laser emission under electrical pumping of the microdisk evanescently couples to an underlying Si WG. The light is coupled out of the chip using grating couplers, and is collected with optical fibers. A DBR structure can be found on one side of the WG.
Two optical levels can be identified in Fig. 1. The lowest level consists of narrow single-mode SOI waveguides embedded in SiO2 (with a width of 600nm and a height of 220nm). The upper level consists of a 583nm thick InP-based membrane. The microdisk lasers consist of a 483nm thin disk cavity on top of a 100nm thin InP bottom contact layer. The active layers consist of three compressively strained InAsP quantum wells (QWs) emitting around 1530nm, surrounded by an n-doped layer on the bottom and a p-doped layer on the top to form the diode structure. A tunnel junction is implemented on the p-side such that an n-type contact layer can be used instead of a heavily doped p-type contact layer, in order to significantly reduce optical absorption. The tunnel junction also ensures uniform current injection over the disk. The two levels are separated by a thin transparent layer (50nm) of a low refractive index material (n = 1.54 for divinylsiloxane-bis-benzocyclobutene (DVS-BCB)) and 65nm of SiO2 (n = 1.47), allowing evanescent coupling to the underlying waveguide. The designed microdisk lasers have a diameter of 7.5µm. A distributed Bragg reflector (DBR) implemented at one end of the bus waveguide forces the laser to emit light towards the other end. If the DBR structure is implemented in the CCW emission direction of the system, light emitted in the CCW mode is partially coupled back to the CW mode inside the microdisk laser. The power coupled back into the cavity leads to a photon density increase and unidirectional lasing in the desired direction. Compared with external injection from another laser, this approach does not require additional power consumption and does not suffer from a potential wavelength mismatch between two lasers.
The SOI waveguides are fabricated in a CMOS fab using 193nm deep ultraviolet (DUV) lithography. The passive SOI circuit for the considered device consists of a 600nm-wide waveguide, tapered down on both sides to a 500nm-wide waveguide, as depicted in Fig. 2. On one side, the 500nm-wide waveguide is tapered up to a 2µm-wide waveguide, itself tapered up to a shallow-etched (etch depth: 70nm out of the 220nm) grating coupler (GC) used to collect the laser emission from the microdisk out of the chip into a single-mode optical fiber [13]. On the other side, the 500nm-wide waveguide is tapered up to a 2µm-wide waveguide, where the DBR structure is defined. Two DBR configurations are implemented. Both are shallow-etched with a fill factor of 50%, but they differ in period: one has a period of 290nm, the other a period of 300nm. In both cases the waveguide after the DBR is further tapered so that another grating coupler can be defined, used to collect the laser emission from the microdisk into a single-mode optical fiber. The fiber grating couplers are optimized for maximum coupling efficiency at 1.55µm. Above threshold, the laser emission from the microdisk is coupled to the TE mode of the waveguide and is simultaneously collected out of both grating couplers into optical fibers under a 10° angle in order to maximize the collection at 1.55µm. The DBR is in both cases designed to be 55µm away from the middle of the 600nm-wide waveguide section. The device fabrication relies on the adhesive bonding of the molecular-beam-epitaxy-grown InP-based heterostructure onto the SOI with the use of the planarizing polymer DVS-BCB. Prior to bonding, 65nm of SiO2 is deposited on the unprocessed InP die. The III-V die is positioned upside down on top of the SOI waveguide circuit, on which DVS-BCB has been spin-coated, and the resulting bonded structure is cured. After the bonding process, the InP substrate is wet-etched until only the desired epitaxial structure remains. Alignment markers for subsequent processing are defined in the Si layer relative to the waveguide structures, allowing accurate alignment of the microdisk lasers with respect to the waveguides. A nitride hard mask is deposited by PECVD on top of the epitaxy; this layer also protects the DVS-BCB and underlying silicon waveguides in the following processing steps. The pattern is transferred into the nitride mask using reactive ion etching, and the microdisks are patterned in the III-V membrane using inductively coupled plasma (ICP) etching. The etching is monitored in order to leave 100nm of n-doped InP as a bottom contact layer.
The next step consists of defining the metallic bottom contact by lift-off. A thin layer of Ti (40nm) is first deposited to enhance the adhesion of the metal on the III-V material; 50nm of platinum and a thick gold layer (100nm) are then deposited to complete the bottom contact. In order to separate two adjacent microdisk lasers, the InP bottom contact layer between them is etched with ICP. The sample is then planarized by spin-coating and curing undiluted DVS-BCB. The low refractive index of this material ensures good optical confinement of the laser light in the microdisk laser. It also reduces the optical losses induced by the metallic top contact, as the distance between the optical mode and the metal can be made large enough. The next steps consist of opening vias in the overcladding DVS-BCB layer to access the metallic bottom contact and depositing the metallic top contact by lift-off (with a recipe similar to that for the bottom contact). Finally, an 800nm-thick layer of gold is deposited on the sample to define metallic pads that are used to individually probe each microdisk laser. The thick contact also serves as a heat sink, improving heat dissipation under continuous-wave bias. From several cross-sections performed with a focused ion beam (FIB), we conclude that the total bonding thickness is uniform everywhere on the sample (115nm above the waveguides). The DVS-BCB overcladding layer above the edge of the microdisk laser is 900nm thick. Figure 3 is a microscope picture of the sample, in which the processed microdisk lasers can be seen; it also illustrates the high potential for dense integration of microdisk lasers on a silicon photonic integrated circuit.
Simulation of the unidirectional behavior
An approximate analytical solution of the whispering gallery modes can be found by solving the Helmholtz equation in cylindrical coordinates. Because microdisk lasers do not have facets through which the light can be coupled out, an evanescent coupling towards a neighboring waveguide is assumed in this theoretical approach.
To explain the directional behavior of the microdisk lasers, we formulate the rate equations in terms of two counterpropagating whispering gallery modes with electric fields E_+ and E_-. The spontaneous emission of the microdisk laser is implemented through two noise fields of the form E_noise = RR · n · e^{j2πn'}, with RR representing the spontaneous emission amplitude and n and n' random numbers between 0 and 1. Following [14,15], we find

dE_±/dt = (1/2)(1 + jα)(G_± − 1/τ_p) E_± − (K_d + jK_c) E_∓ + E_noise,±   (3.1), (3.2)

The model includes internal losses in the cavity through the photon-lifetime parameter τ_p.
The parameter α is the linewidth enhancement factor, which accounts for variations in refractive index due to carrier fluctuations in the semiconductor medium. G is the modal gain factor, described further below, and K = K_d + jK_c represents an explicit linear coupling rate between the two modes, where K_d is the dissipative coupling and K_c the conservative coupling. This coupling term describes the effects of reflection at the end facets of the silicon waveguide and the coupling between the CW and the CCW modes due to sidewall roughness. For the carrier density rate equation, we find

dN/dt = I − N/τ_c − G_+|E_+|² − G_−|E_−|²   (3.3)

where I denotes the injected current (pump term) and τ_c is the carrier lifetime. The gain experienced in a semiconductor material decreases at high optical intensity. This is due to gain suppression, which takes place even when the total carrier density N is constant and reflects the reduction of 'resonant carriers' due to carrier heating and spectral hole burning. To account for this effect, a gain suppression is added in the denominator of the expression of the modal gain:

G_± = Γ v_g g_0 (N − N_0) / (1 + ε_s S_± + ε_c S_∓),

where S_± = |E_±|², ε_s reflects the self-gain suppression and ε_c the cross-gain suppression. The confinement factor Γ is due to the limited height of the active multi-quantum well, N_0 is the transparency carrier density, g_0 is the differential gain, and v_g is the group velocity in the microdisk laser. Strain in the quantum wells has a large impact on the band structure of the active material and can have beneficial effects on the gain, by reducing the transparency carrier density and/or improving the differential gain. The expression of the modal gain is linearized to

G_± ≈ Γ v_g g_0 (N − N_0)(1 − ε_s S_± − ε_c S_∓).

Calculations have shown that ε_c = 2ε_s [16,17]. The cross-gain suppression ε_c will therefore break the symmetry and enforce the unidirectional operation of the laser. The gain suppression is, however, only significant when the photon density is high, which means that at lower output powers a bidirectional regime will be present. Table 1 summarizes the parameters used in the numerical solution of the above set of equations. The value of the linear coupling coefficient K is chosen so that the simulation matches the experimental results. After finding local extrema and stable solutions, the bifurcation diagram as a function of bias current is depicted in Fig. 4. We can distinguish three different regimes. The first regime, just after threshold, is the bidirectional regime: as the optical power is low, nonlinear effects can be neglected and inter-modal coupling is the dominant effect, causing the two counterpropagating modes to be equally present. When the injection current is increased, we can have a bidirectional oscillating regime. The competition between linear coupling and nonlinear gain suppression results in an oscillating behavior [14,19]. In this regime, the intensities of the two counterpropagating modes are modulated with harmonic sinusoidal oscillations and share the same oscillation frequency, which lies in the GHz range. The modulation is out of phase at the two outputs, meaning that the power in one mode is high when it is low in the other. The graph depicts the maximal and minimal values of the mode intensities. The last regime corresponds to unidirectional operation, where the initial conditions determine which of the two modes is dominant [1]. Nonlinear gain suppression is now dominant and one mode suppresses the other, resulting in unidirectional behavior. The CW mode becomes dominant in Fig. 4(a), while the CCW mode becomes dominant in Fig. 4(b), depending on the initial conditions. The noise generated by the spontaneous emission, represented by the fields E_noise,1(t) and E_noise,2(t) in Eqs. (3.1) and (3.2), determines whether the laser lases preferentially in the CW or the CCW mode. Figure 5 illustrates the simulated bifurcation diagram of the microdisk laser, calculated with the parameters from Table 1, except that no gain suppression has been taken into account this time. Without gain suppression in the microdisk cavity, the laser will not lase unidirectionally. A passive optical reflector is now added to the CCW emission direction of the system, feeding laser emission propagating in the CCW direction back into the laser cavity. To simulate this effect, a new term is added to the calculation of the field propagating in the CW direction.
The reflectivity from the bus waveguide is implemented such that the reflection induced by the DBR structure is simulated as r·e^{jΦ}, where r represents the amplitude of the field reflection and Φ the phase of the reflected signal. Figure 6 illustrates the resulting bifurcation diagrams for different values of r and Φ. In Fig. 6(a), the phase is kept constant (Φ = 2π) and the parameter r is swept over the values (0; 0.01; 0.1; 1). For r = 0, the typical bifurcation diagram plotted in Fig. 4 is obtained. Even for the lowest nonzero value of r, all the optical power is coupled to the CW mode of the microdisk laser and a unidirectional regime is present. As the value of r increases, the extinction ratio between the optical power in the CW mode and the optical power in the CCW mode increases. In Fig. 6(b), the phase is kept constant (Φ = π/2) and the parameter r is swept over the same values. We observe that all the optical power is likewise coupled to the CW mode of the microdisk laser as soon as an external reflection is added to the simulation, and the extinction ratio between the optical powers in the CW and CCW modes increases as a function of r. The feedback, as well as the extra phase introduced by the DBR structure, does not have a large impact on the threshold of the microdisk laser. This can be demonstrated theoretically by calculating the threshold gain of a microdisk laser from the coupled rate equations for the complex field amplitudes of the clockwise and counterclockwise propagating laser modes. One can show that in the static case, for which the field amplitudes (and photon numbers) are constant in time, and for bias currents close to the threshold current, where gain suppression can be neglected, the threshold gain is the sum of a term 1/τ_p and a coupling-induced term depending on the amplitudes K_1 and K_2 of the linear coupling coefficients of the clockwise and counterclockwise propagating modes and on their phases Φ_1 and Φ_2. The photon lifetime in the simulations is 4.17ps. The first term of the threshold gain is then of the order of 10^12 s^−1, while the second term is of the order of 10^9 to 10^10 s^−1 in the case of strong back reflection, which makes its influence, including that of the phase factor, negligible. Simulations demonstrate that integrating a reflector on one side of the waveguide to which the microdisk is coupled feeds laser emission back into the laser cavity. This introduces an extra unidirectional gain and results in unidirectional emission of the laser.
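The dimensionless sketch below integrates the two-mode rate equations with the DBR feedback term r·e^{jΦ} by forward Euler; time is in units of the photon lifetime, and all parameter values are illustrative placeholders rather than the paper's Table 1.

import numpy as np

alpha = 4.0                       # linewidth enhancement factor
K = 1e-3 + 2e-3j                  # linear coupling K_d + j*K_c (placeholder)
s_sup, c_sup = 0.005, 0.01        # self- and cross-gain suppression (eps_c = 2*eps_s)
gamma, mu = 0.002, 2.0            # normalised carrier decay rate and pump (placeholder)
r_fb, Phi = 0.05, 2 * np.pi       # DBR field reflection fed from CCW into CW
rng = np.random.default_rng(0)

Ep = Em = 1e-3 + 0j               # CW / CCW fields
N, dt = 1.5, 0.01
for _ in range(400000):
    Sp, Sm = abs(Ep) ** 2, abs(Em) ** 2
    Gp = N * (1 - s_sup * Sp - c_sup * Sm)              # linearised modal gain
    Gm = N * (1 - s_sup * Sm - c_sup * Sp)
    noise = lambda: 1e-4 * rng.random() * np.exp(2j * np.pi * rng.random())
    dEp = 0.5 * (1 + 1j * alpha) * (Gp - 1) * Ep - K * Em \
          + r_fb * np.exp(1j * Phi) * Em + noise()      # feedback couples CCW into CW
    dEm = 0.5 * (1 + 1j * alpha) * (Gm - 1) * Em - K * Ep + noise()
    dN = gamma * (mu - N - N * (Sp + Sm))
    Ep, Em, N = Ep + dt * dEp, Em + dt * dEm, N + dt * dN

print("P_CW / P_CCW =", abs(Ep) ** 2 / abs(Em) ** 2)    # >> 1 means unidirectional CW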
From the above equations, we can conclude that there are two main effects in the coupling between the two modes. The cross-gain suppression prevents the counterpropagating cavity mode from building up. This effect is necessary for unidirectional operation, but a low value of the linear coupling K = K_d + jK_c also favors unidirectional operation. The dissipative coupling K_d and the conservative coupling K_c describe the effects of parasitic reflection due to sidewall roughness. We can simulate the effect of the linear coupling K on the bifurcation diagram of a microdisk laser. In Fig. 7, we plot the extinction ratio in dB of the optical powers P_CW/P_CCW as a function of the ratio of K (as given in Table 1) to the reflection r from the bus waveguide, when the microdisk is biased at 1mA. The phase Φ is kept constant while the amplitude of the field reflection r is swept. We demonstrate that the external reflection induced by the DBR along the CCW direction clearly influences the behavior of the microdisk laser. The linear dependence of the ratio of optical powers in dB on the ratio in dB of the linear coupling and the reflectivity indicates that the higher the external reflection from the bus waveguide, the higher the extinction ratio between the optical powers in the CW and CCW modes. Neither the phase of the external reflection nor the linear coupling K has a significant impact on the slope of this linear dependence.
Experimental demonstration of the unidirectionality of microdisk lasers
The optical power-current (LI) characteristics of two 7.5µm-diameter lasers operating in continuous wave at room temperature are plotted in Fig. 8. The effective DBR length in both designs was chosen to be about 15µm, with 50 periods. The microdisk laser characterized in Fig. 8(a), named laser A, is coupled to a waveguide where a DBR with a period of 300nm is implemented in the CCW emission direction of the system. The microdisk laser characterized in Fig. 8(b), named laser B, is coupled to a waveguide where a DBR with a period of 290nm is implemented in the CW emission direction. Thermal roll-over appears in both devices for bias currents higher than 1.5mA. The spectra of the microdisk lasers coupled to a waveguide with a DBR period of 300nm and 290nm are plotted in Figs. 9(a) and 9(b), respectively, at 1.2mA bias (continuous wave operation at room temperature). Single-mode operation in continuous wave regime is demonstrated for the two devices, at 1554.0nm and 1555.8nm respectively. A side-mode suppression ratio of 18.6dB is measured for laser A in Fig. 9(a), and it is higher than 25dB for laser B in Fig. 9(b). The free spectral range is around 27nm for both devices. At the peak lasing wavelength of the microdisk laser, the extinction ratio between the optical powers coupled out of the waveguide on the side of the DBR structure and on the side without DBR is 46.1dB and 39.9dB, respectively. Table 2 summarizes the wavelengths of each of the longitudinal modes and their measured optical powers coupled out of the grating couplers on the side of the DBR and on the side without DBR. The passive characterization of the DBR is performed on a nominally identical passive SOI design covered in DVS-BCB. Light from a tunable laser is coupled into the chip with a 10-degree angled single-mode optical fiber through a grating coupler to a waveguide where a DBR with a period of either 290nm or 300nm is present. The optical power at the output of the waveguide, i.e. after the DBR structure, is collected out of another grating coupler into a single-mode optical fiber under a 10-degree angle. Figure 10 depicts the results of the characterization of the DBR structure with a period of 300nm [Fig. 10(a)] and with a period of 290nm [Fig. 10(b)]. In both cases, the transmission measurement is plotted together with the transmission through a straight SOI waveguide without DBR structure.

Fig. 10. Transmission characteristics of a WG where the DBR has a period of 300nm (a), and of a WG where the DBR has a period of 290nm (b). The transmission characteristic of a straight WG of the same sample, acting as reference, is systematically plotted.
The peak efficiency of the grating couplers on this sample is located at 1580.5nm due to the BCB top cladding layer. On Fig. 10(a), an extinction ratio at the peak lasing wavelength of the microdisk laser (1554.0nm) of 38.1dB is measured between the reference transmission and the transmission through the DBR structure. In the case of Fig. 10(b), an extinction ratio at the peak lasing wavelength of the microdisk laser (1555.8nm) of 33.1dB is measured. Even though the reflection induced by the DBR structure is close to 100%, the low coupling efficiency between the microdisk laser and the silicon waveguide lowers the amount of reflection actually felt by the microdisk laser. Comparing the extinction ratios measured on Figs. 10(a) and 10(b) to the ones extracted from the spectral measurements from Figs. 9(a) and 9(b), we demonstrate an 8dB difference in lasing power between the CW and the CCW modes for laser A, and a 6.9dB difference for laser B.
One important requirement for the use of microdisk lasers in optical interconnects is to demonstrate that devices do not lase in different directions or switch from one lasing direction to the other depending on the injection current and the temperature. In [20], the LI curves of a microdisk laser at elevated temperatures remained smooth and unidirectional under pulsed driving conditions, in contrast to the case where continuous wave drive conditions were applied. Most likely this is because a change in ambient temperature affects both the silicon waveguide and the InP-based microdisk cavity, while the self-heating effect in continuous wave mode only heats up the disk cavity. In this study, we investigate the influence of the DBR on the unidirectionality of the laser over a broad range of temperatures. The sample was heated by means of a Peltier element, under continuous drive conditions. The temperature was increased from 10°C to 35°C in steps of 5°C (10°C, 15°C, 20°C, and so on). As the DBR structure is designed to be 55µm away from the microdisk laser, this corresponds to approximately a 2 radian phase change for the reflected light. Figure 11 shows the optical power-current (LI) characteristic of laser B for the different ambient temperatures of the stage. The optical powers coupled out of the grating couplers in the CCW and CW directions are recorded simultaneously. We demonstrate that, due to the presence of the DBR on one side of the waveguide, the LI curves remain unidirectional under continuous drive conditions over a broad range of temperatures. Lasing in the CCW direction is measured up to 35°C, and a maximum output power of 3µW is measured in the fiber at 10°C (corresponding to 15µW in the silicon waveguide).

Fig. 11. LI curves of laser B under continuous wave operation at elevated temperatures. The slope efficiency decreases and the threshold current increases at higher temperatures. The power is the fiber-coupled output power.
As expected, the threshold current gradually increases with increasing temperature, as can be seen in Fig. 11, and the slope efficiency drops. The characteristic temperature of the microdisk laser can be extracted by fitting the natural logarithm of the threshold current versus the ambient temperature. A value of 55K was found, which indicates that the laser is highly sensitive to temperature variations. Above 2.5mA, some optical power is collected out of the grating coupler behind the DBR, in the CW direction of the system. The DBR structure is limited in bandwidth, and heat generated in the microdisk laser leads to a red-shift of its optical spectrum. Because of this red-shift, the longitudinal mode around 1580nm in the CW direction starts to fall outside the bandwidth of the DBR structure, and optical power is therefore collected in the CW direction for wavelengths outside the DBR bandwidth.
Conclusion and discussion
In conclusion, we demonstrated and quantified stable unidirectional lasing in microdisk lasers heterogeneously integrated on SOI. Feedback from a passive distributed Bragg reflector is used to achieve stable unidirectional operation. This simple passive design does not add optical losses to the system and does not increase its power consumption. The implementation of this solution is key to avoiding the appearance of a 'memory' effect in microdisk lasers. It can be implemented to counteract processing effects, such as sidewall roughness, that threaten unidirectional operation of the lasers. Different devices belonging to the same design can now lase in the same direction, with higher efficiency and without switching from one lasing direction to the other depending on the injection current and the temperature. This makes the use of microdisk lasers for optical interconnect applications very attractive. | 7,329 | 2013-08-12T00:00:00.000 | [
"Engineering",
"Physics"
] |
Novel functional insights from the Plasmodium falciparum sporozoite-specific proteome by probabilistic integration of 26 studies
Plasmodium species, the causative agent of malaria, have a complex life cycle involving two hosts. The sporozoite life stage is characterized by an extended phase in the mosquito salivary glands followed by free movement and rapid invasion of hepatocytes in the human host. This transmission stage has been the subject of many transcriptomics and proteomics studies and is also targeted by the most advanced malaria vaccine. We applied Bayesian data integration to determine which proteins are not only present in sporozoites but are also specific to that stage. Transcriptomic and proteomic Plasmodium data sets from 26 studies were weighted for how representative they are of sporozoites, based on a carefully assembled gold standard of Plasmodium falciparum (Pf) proteins known to be present or absent during the sporozoite life stage. Of 5418 Pf genes for which expression data were available at the RNA level or at the protein level, 1105 were identified as enriched in sporozoites and 90 as specific to them. We show that Pf sporozoites are enriched for proteins involved in type II fatty acid synthesis in the apicoplast and in GPI anchor synthesis, but otherwise appear metabolically relatively inactive in the salivary glands of mosquitoes. Newly annotated hypothetical sporozoite-specific and sporozoite-enriched proteins highlight sporozoite-specific functions. They include PF3D7_0104100, which we identified as homologous to the prominin family, which in humans has been related to a quiescent state of cancer cells. We document high levels of genetic variability for sporozoite proteins, specifically for sporozoite-specific proteins that elicit antibodies in the human host. Nevertheless, we can identify nine relatively well-conserved sporozoite proteins that elicit antibodies and that together can serve as markers for previous exposure. Our understanding of sporozoite biology benefits from identifying key pathways that are enriched during this life stage. This work can guide studies of the molecular mechanisms underlying sporozoite biology and of potential well-conserved targets for marker and drug development. Author Summary When a person is bitten by an infectious malaria mosquito, sporozoites are injected into the skin with mosquito saliva. These sporozoites then travel to the liver, invade hepatocytes and multiply before the onset of the symptom-causing blood stage of malaria. By integrating published data, we contrast sporozoite protein expression with other life stages to filter out the unique features of sporozoites that help us understand this stage. We used a "guideline" that we derived from the literature on individual proteins so that we knew which proteins should be present or absent at the sporozoite stage, allowing us to weight 26 data sets for their relevance to sporozoites. Among the newly discovered sporozoite-specific genes are candidates for fatty acid synthesis, while others might play a role in keeping the sporozoites in an inactive state in the mosquito salivary glands. Furthermore, we show that most sporozoite-specific proteins are genetically more variable than non-sporozoite proteins. We identify a set of conserved sporozoite proteins against which antibodies can serve as markers of recent exposure to sporozoites or that can serve as vaccine candidates. Our predictions of sporozoite-specific proteins and the assignment of previously unknown functions give new insights into the biology of this life stage.
Malaria is a mosquito-transmittable disease resulting in over 220 million clinical cases and half a million deaths annually. Most deaths are caused by Plasmodium falciparum (Pf), one of the five species of Plasmodium that can infect humans. The infection begins with the deposition of liver-infective sporozoite forms in the skin by blood-feeding mosquitoes. These sporozoites travel to the liver, where they invade, differentiate and multiply asymptomatically inside hepatocytes for approximately a week before releasing red blood cell (RBC)-infective merozoites into the circulation. The subsequent asexual multiplication, rupture and re-invasion of the parasites into circulating RBCs cause the symptoms associated with malaria.

The data sets used were examined for their correlations with each other (S1 Figure). By and large, most data sets show little correlation. We left the few correlated data sets in to keep the data integration transparent and to maximize the amount of included information. Oocyst-derived and salivary gland sporozoites showed high correlations with each other, which led us to combine all respective studies into the "sporozoite" data input and not make a distinction between those stages.

S1 Figure: Correlations of all integrated data sets with each other, hierarchically clustered.
Proteomic data were converted into unique peptide counts for each protein identified, and transcriptomic data were converted into expression percentiles, for a total of 5668 P. falciparum gene IDs. Proteomic data were binned consistently for all data sets, with 0, 1, or >1 identified unique peptides; transcriptomic data were binned into four bins, containing transcripts in the >80, >60, >40 and <40 expression percentiles, respectively. The data sets were then weighted according to their ability to retrieve the gold standard proteins. Each bin in each data set was given a log2 score according to Equation (1), where B = present in bin, S = sporozoite specific and nonS = not sporozoite specific:

score(B) = log2( P(B | S) / P(B | nonS) ).   (1)

The Bayesian score for an individual protein is then the sum of the scores for the bins in which it occurs (one bin per data set). As the expected number of sporozoite-specific proteins was unknown, no prior was included; rather, we used cutoffs based on the positions of known sporozoite-specific proteins to define sporozoite-specific and sporozoite-enriched proteins.
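A minimal sketch of this scoring scheme (our code, not the authors'); the pseudocount avoiding log(0) is our assumption, and `datasets` maps each data set name to a {protein_id: bin_label} assignment.

from math import log2

def bin_scores(assignments, pos, neg, pseudo=0.5):
    # Estimate P(bin | S) / P(bin | nonS) from the gold standards, Eq. (1).
    counts = {}
    for prot, b in assignments.items():
        c = counts.setdefault(b, [pseudo, pseudo])   # bin -> [n_pos, n_neg]
        if prot in pos:
            c[0] += 1
        elif prot in neg:
            c[1] += 1
    n_pos = sum(c[0] for c in counts.values())
    n_neg = sum(c[1] for c in counts.values())
    return {b: log2((c[0] / n_pos) / (c[1] / n_neg)) for b, c in counts.items()}

def protein_scores(datasets, pos, neg):
    total = {}
    for assignments in datasets.values():
        scores = bin_scores(assignments, pos, neg)
        for prot, b in assignments.items():
            total[prot] = total.get(prot, 0.0) + scores[b]   # sum over data sets
    return dict(sorted(total.items(), key=lambda kv: -kv[1]))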
Overrepresentation of function categories in sporozoite proteins
GO terms were acquired from PlasmoDB and formatted into a .gmt file according to the format specified by the GSEA server at the Broad Institute [14]. GSEAPreranked was run on the ranked list of sporozoite-specific proteins with the conservative "preranked" option in the "classic mode".

…to the sporozoite stage (S1 Table). In order to optimally exploit those data to obtain sporozoite-enriched proteins, we integrated them in a Bayesian manner (Methods). Integrating the data sets using the sets of 31 positive and 39 negative gold standard proteins (S2 Table, S3 Table) produced a list of all proteins in P. falciparum ranked according to their likelihood of being sporozoite-specific (S4 Table).
The score distribution of the negative and positive gold standard proteins varied depending on whether all available data sets were used, or proteomic or transcriptomic data separately (Figure 1A and S2 Figure). The Bayesian integration using only transcriptomic data sets resulted in a ranked list in which 14 negative gold standard genes scored higher than the lowest-scoring positive gold standard gene (S2 Figure). This overlap was lower when using only proteomic data sets (S2 Figure).

Proteins were considered sporozoite-specific when ranking in the first quartile of the gold standard protein list, i.e. scoring above 12.27, which represents a factor of at least 2^12.37 to 2^5200 higher likelihood of being specific to sporozoites than to any of the other stages. Note that hereby we ignore the prior probability of a protein being sporozoite-specific at all. Although such a prior is standard in Bayesian data integration, in this case a prior, e.g. of 1/5, is, given the high cutoff for sporozoite-specific proteins that we used, not very relevant. As we did not consider STARP to be sporozoite-specific (see above), we considered proteins scoring higher than the second-to-lowest positive gold standard protein (the LIMP protein, score = −1.31) to be enriched in sporozoites. Finally, the abundance of unique peptides by mass spectrometry was assessed for each protein. Proteins were deemed present in sporozoites when identified in two independent studies or with more than 1 unique peptide in at least one study. Our analysis thus identified 90 sporozoite-specific proteins, 1105 sporozoite-enriched proteins and 2736 proteins present in sporozoites (S4 Table). Of the 90 sporozoite-specific proteins, 67 were not part of the positive gold standard list. We validated our predictions by 5-fold cross validation (5-fold CV), randomly leaving out 1/5th of the gold standard proteins from the data integration and assessing their predicted sporozoite specificity based on the remaining data (Fig 1B).
The high sensitivity and specificity indicated that novel sporozoite-specific proteins would also score higher than non-sporozoite-specific proteins. We also compared the ranking of the gold standard proteins based on the integrated data with rankings based on individual data sets of sporozoite RNAs and proteins. The cross validation separated the gold standard proteins better than the individual data sets, supporting the integration of multiple data sets (Fig 1B).

... Toxoplasma gondii, an apicomplexan related to P. falciparum that is present in the HHpred database.
The sporozoite-specific protein with the highest score that was not part of the gold standard, ...

[Figure legend] The level of polymorphisms among P. falciparum strains is indicated for the separate regions as the average number of polymorphisms per nucleotide in the strains in PlasmoDB. PF3D7_0104100 has a high density of polymorphisms within P. falciparum strains that are concentrated in the second extracellular loop. Cysteines that are conserved among the homologs in Plasmodium species are indicated. The cysteines that are also conserved in human homologs are in bold. Note that the conserved cysteines occur in close proximity to each other, suggesting the formation of disulphide bonds.

... shorter TM region, putting the cysteine that is located in that TM region in the extracellular space. The Toxoplasma protein was included because its sequence profile has significant sequence similarity against both the human protein profile (E = 2e-20) and the Plasmodium protein profile (E = 3.4e-44), while the similarity between the human and the Plasmodium protein is less significant (E = 0.0001).
The width of the arrows is determined by the Bayesian score reflecting the level of overrepresentation of that enzyme in sporozoites, e.g. 16 for ACS2 and 8 for FabB/F (S4 Table). For PDH, which consists of three proteins, the width of the arrow was determined by the average of those three. Most FASII proteins are enriched in sporozoites, except PKII, FABZ and LipA. The scheme is a simplification of the pathway as depicted by Shears et al. [22], to which ACS2 was added as it is highly enriched in sporozoites and relevant for fatty acid synthesis.
In contrast to the paucity of upregulated processes, there is significant underrepresentation (FDR ≤ 0.001) of processes linked to splicing, translation, translation elongation, folding of proteins as well as proteolysis (S5 Table). These processes are primarily modulated by a number of sporozoite-specific ... (Table S7). Induced antibody profiles represent a blueprint of immunogenic proteins in sporozoites, liver stages and early blood stages.
Here we focus on the set of sporozoite target proteins that may be used as markers of previous sporozoite exposure or may act as potential targets for vaccines. Minimal sequence variation between Pf strains would thereby strengthen the candidature of the proteins for epidemiological or clinical applications; however, antibody-eliciting proteins, including CSP, show relatively high levels of polymorphisms among sequenced malaria strains (S4 Figure), and sporozoite-specific proteins also show high levels of polymorphisms (S5 Figure). The selection of proteins that could serve as markers of exposure or vaccine candidates is a compromise between protein sequence conservation on the one hand and the frequency of volunteers with antibodies to that protein on the other. As the maximum level of sequence variation between candidate marker proteins we used 8 nonsynonymous SNPs per kilobase, which is lower than the variation in, for instance, Pfs48/45, a highly conserved gametocyte protein with 8.9 nonsynonymous SNPs per kilobase. As the minimum number of people in which a protein should elicit antibodies we chose six (out of the 38). Using a "greedy search algorithm" (Methods), we selected a set of nine proteins of which at least one elicited antibodies per volunteer (Table 1); a sketch of such a selection procedure is given below.
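As a concrete illustration of the greedy selection, below is a minimal Python sketch of a set-cover-style search under the two thresholds stated in the text. The function name, data layout, and tie-breaking are assumptions for illustration, not the authors' implementation.

```python
def greedy_marker_selection(antibody_hits, snp_rates,
                            max_snp_rate=8.0, min_volunteers=6):
    """Greedy set cover: repeatedly pick the conserved protein that covers the
    most volunteers not yet covered by a selected protein.

    antibody_hits: dict protein -> set of volunteer IDs with antibodies
    snp_rates:     dict protein -> nonsynonymous SNPs per kilobase
    """
    # keep conserved proteins that elicit antibodies in enough volunteers
    candidates = {p: vols for p, vols in antibody_hits.items()
                  if len(vols) >= min_volunteers
                  and snp_rates.get(p, 0.0) <= max_snp_rate}
    uncovered = set().union(*antibody_hits.values()) if antibody_hits else set()
    selected = []
    while uncovered and candidates:
        # protein covering the most still-uncovered volunteers
        best = max(candidates, key=lambda p: len(candidates[p] & uncovered))
        if not candidates[best] & uncovered:
            break  # no remaining candidate covers any uncovered volunteer
        selected.append(best)
        uncovered -= candidates.pop(best)
    return selected, uncovered  # uncovered is non-empty if full cover fails

# toy data (hypothetical proteins and volunteer IDs)
hits = {"CSP": {1, 2, 3, 4, 5, 6, 7}, "P2": {1, 8, 9, 10, 11, 12}, "P3": {8, 13}}
rates = {"CSP": 12.0, "P2": 3.1, "P3": 1.0}
print(greedy_marker_selection(hits, rates))
```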
... were also applied to the indels. Numbers of called SNPs and indels were roughly similar for NF135 and NF166, reflecting their independent geographic origins. As expected, NF54 showed much lower numbers of called SNPs and indels than the other strains, as it is the parental line from which 3D7 is derived (S8 Table). As observed in PlasmoDB, proteins in NF166 and NF135 showed high levels of polymorphisms for antibody-binding proteins enriched in sporozoites (Figure S6).
Despite translational repression, which is expected to reduce the correlation between mRNA and protein levels, including the mRNA levels led to a better performance at the protein level than only including proteomics data. Cross validation furthermore shows that the integrated list is better at separating the gold standard positive and negative data sets from each other than the individual data sets.
In this study, we did not separate data sets derived from oocyst sporozoites (the earliest form of this stage) and salivary gland sporozoites (the mature form). There may have been subtle differences in protein expression between sporozoites in the two differing host environments (midgut versus salivary gland) that were not detected. However, oocyst-sporozoite data represent a minority of the combined data set (2/26) and are highly correlated with the salivary gland-derived sporozoite data (S1 Figure). A list of proteins, each with its own sporozoite specificity score, will be a valuable resource for studying sporozoite biology and understanding novel protein function.

... [67]. Indeed, as we have shown here, both sporozoite-specific proteins and proteins that elicit antibodies are highly polymorphic, and only a fraction of those are conserved between NF54, NF135 and NF166 (Fig S6, S7). The correlation between immunogenicity and level of sequence conservation suggests that antigenic drift plays a role in the sequence variation. It is not clear whether antigenic drift would also be responsible for the high variation among sporozoite-specific proteins, as we did not observe a correlation between the Bayesian score of sporozoite specificity and the immunogenicity. | 3,326.4 | 2020-06-18T00:00:00.000 | [
"Biology"
] |
Phase-Shift Cyclic-Delay Diversity for MIMO OFDM Systems
Phase-shift cyclic-delay diversity (PS CDD) and space-frequency block code (SFBC) PS CDD schemes are developed for multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. The proposed PS CDD scheme preserves the diversity advantage of traditional CDD in uncorrelated multi-antenna channels and, furthermore, removes the frequency-selective nulling problem of traditional CDD in correlated multi-antenna channels.
Introduction
It is well known that a multiple-input-multiple-output (MIMO) transmission system can provide benefits in throughput and reliability in multipath fading channels over single-antenna systems [1]. These benefits are now being realized; MIMO transmission schemes have been adopted in most wireless standards, including 3GPP Long-Term Evolution (LTE), 3GPP LTE-Advanced, WiMax, and IEEE 802.16m, where these standards are based on orthogonal frequency division multiple access (OFDMA). OFDMA became popular partly because it is sum-rate optimal for general single-input-single-output (SISO) channels, and the probability of OFDMA being optimal is nonnegligible for MIMO channels [2].
In particular, transmit diversity (TxD) schemes are utilized to realize the reliability benefits of multi-antenna systems in slow-fading environments without channel state information (CSI) available at the transmitter side, by providing multiple signals conveying the same information over different spatial channels. In OFDM-based systems, Alamouti space-frequency block code (SFBC) [3] and cyclic delay diversity (CDD) [4] are widely adopted TxD schemes for 2-transmit-antenna diversity (2-TxD) systems. The SFBC scheme has advantages over the other TxD schemes: it is simple to encode at the transmitter side and easy to decode at the receiver side, while still achieving the optimal uncoded diversity gain in 2 × 1 Rayleigh fading channels (i.e., with two transmit and one receive antennas). On the other hand, CDD with small delay is an attractive diversity scheme in OFDM systems, in the sense that it requires only one set of pilot signals, as opposed to the SFBC scheme, where two sets of pilot signals are required. However, CDD has some drawbacks: it does not provide uncoded diversity, and thus its block-error rate (BLER) performance is worse than that of uncoded diversity schemes like SFBC, and it exhibits frequency-selective nulls in antenna-correlated channels.
In this paper, we develop and analyze a new TxD scheme for OFDM systems, phase-shift CDD (PS CDD), which takes advantage of both SFBC and CDD and at the same time mitigates the issue of frequency-selective nulls. The performance of the introduced transmit diversity schemes is evaluated through numerical simulation results.
System Model
We consider a MIMO OFDM system with M transmit antennas and N receive antennas, where M = 1, 2, . . . and N = 1, 2, . . .. For each subcarrier k = 0, 1, . . ., N_FFT − 1, where N_FFT is the FFT size of the OFDM system, the received signal is described by

$\mathbf{y}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{w}_k, \quad (1)$

where $\mathbf{y}_k \in \mathbb{C}^{N\times 1}$ is the received vector, $\mathbf{H}_k \in \mathbb{C}^{N\times M}$ is the MIMO channel matrix, $\mathbf{x}_k \in \mathbb{C}^{M\times 1}$ is the transmitted vector, and $\mathbf{w}_k \in \mathbb{C}^{N\times 1}$ is an additive white Gaussian noise vector with mean 0 and covariance matrix $\operatorname{diag}(\sigma^2)$. Here, $\mathbb{C}$ is the set of complex numbers.
In this paper, we focus on TxD schemes used for robust transmissions in various channel conditions, such as high-Doppler channels and highly frequency-selective channels. Such TxD schemes transmit only one channel-coded stream to ensure maximum reliability, while sacrificing spectral efficiency. We note that general TxD schemes may transmit multiple streams [1] and can also be used in a multi-user setting [5] when the spatial degrees of freedom of a MIMO channel are greater than one. Among this class of TxD schemes, Alamouti SFBC and CDD are two popular TxD schemes in OFDM-based wireless transmission systems.
Background: 2-TxD Schemes
Alamouti SFBC can be described in a 2-transmit and 1-receive antenna system. For an SFBC transmission, two transmit signals at two adjacent subcarriers are paired, which we denote as $\mathbf{x}_k$ and $\mathbf{x}_{k+1}$. These two vectors are constructed in such a way that

$\mathbf{x}_k = \sqrt{\tfrac{P}{2}} \begin{bmatrix} s_k \\ s_{k+1} \end{bmatrix}, \qquad \mathbf{x}_{k+1} = \sqrt{\tfrac{P}{2}} \begin{bmatrix} -s_{k+1}^* \\ s_k^* \end{bmatrix}, \quad (2)$

where $s_k$ and $s_{k+1}$ are modulated symbols with variance 1 and $P$ is the total transmit power at each subcarrier. Under this construction, we are able to obtain an orthogonal system representation at the receiver side, that is, the system transfer matrix is orthogonal:

$\begin{bmatrix} y_k \\ y_{k+1}^* \end{bmatrix} = \sqrt{\tfrac{P}{2}} \begin{bmatrix} h_{11} & h_{12} \\ h_{12}^* & -h_{11}^* \end{bmatrix} \begin{bmatrix} s_k \\ s_{k+1} \end{bmatrix} + \begin{bmatrix} w_k \\ w_{k+1}^* \end{bmatrix}, \quad (3)$

where $h_{11}$ and $h_{12}$ are the channel coefficients between the receive antenna and each of the transmit antennas at subcarriers $k$ and $k+1$, under the assumption that the channel does not vary over the two subcarriers. Due to the orthogonal system transfer matrix, Alamouti SFBC is called an orthogonal TxD scheme. Utilizing the orthogonality property, we can detect $s_k$ and $s_{k+1}$ from the two Alamouti receiver equations as follows:

$\hat{s}_k = h_{11}^* y_k + h_{12} y_{k+1}^*, \qquad \hat{s}_{k+1} = h_{12}^* y_k - h_{11} y_{k+1}^*. \quad (4)$

From the receiver equations, one can easily verify that the received signal-to-noise ratios (SNRs) of the two modulated symbols are the same and equal to $(|h_{11}|^2 + |h_{12}|^2)P/(2\sigma^2)$, which reveals the uncoded diversity gain of the SFBC scheme.
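The following minimal NumPy sketch illustrates the Alamouti pairing and combining described above for one 2×1 subcarrier pair; the QPSK alphabet, random seed, and noise level are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
P, sigma2 = 1.0, 0.1
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # h11, h12
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=2) / np.sqrt(2)  # unit-variance QPSK

# Alamouti pair over two adjacent subcarriers (eq. (2))
x_k  = np.sqrt(P / 2) * np.array([s[0], s[1]])
x_k1 = np.sqrt(P / 2) * np.array([-np.conj(s[1]), np.conj(s[0])])

w = np.sqrt(sigma2 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
y_k, y_k1 = h @ x_k + w[0], h @ x_k1 + w[1]

# Orthogonal combining (eq. (4)): both symbols see the same gain
g = np.abs(h[0])**2 + np.abs(h[1])**2   # |h11|^2 + |h12|^2
s0_hat = (np.conj(h[0]) * y_k + h[1] * np.conj(y_k1)) / (np.sqrt(P / 2) * g)
s1_hat = (np.conj(h[1]) * y_k - h[0] * np.conj(y_k1)) / (np.sqrt(P / 2) * g)
print(s, s0_hat, s1_hat)  # estimates match s up to the residual noise
```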
When the number of receive antennas is greater than 1, maximal ratio combining (MRC) produces a received SNR of

$\mathrm{SNR} = \sum_{n=1}^{N} \left( |h_{n1}|^2 + |h_{n2}|^2 \right) \frac{P}{2\sigma^2}, \quad (5)$

where $h_{n1}$ and $h_{n2}$ are the channel coefficients between receive antenna $n$ and each of the transmit antennas at subcarriers $k$ and $k+1$. Note that when the number of transmit antennas is larger than 2, no orthogonal TxD scheme achieving the full rate has been found [6]. One popular extension of SFBC to a 4-Tx-antenna transmitter is frequency-switched transmit diversity (FSTD) [4]. For an SFBC-FSTD transmission, four adjacent subcarriers, $k$, $k+1$, $k+2$, and $k+3$, are grouped. On the first two subcarriers, $k$ and $k+1$, one SFBC pair is transmitted on the first and second antennas, while the third and fourth antennas are turned off. On the third and fourth subcarriers, $k+2$ and $k+3$, another SFBC pair is transmitted on the third and fourth antennas, while the first and second antennas are turned off. SFBC-FSTD is easy to encode and decode, since it keeps the orthogonality property and achieves coded diversity across four transmit antennas. However, four pilot signals are needed for demodulation of SFBC-FSTD, which may increase the pilot overhead of a system.
On the other hand, CDD is a coded TxD scheme in an OFDM system, which can be designed for an arbitrary number of Tx antennas. In a two-transmit and one-receive antenna system, at subcarrier $k$, a transmit signal $\mathbf{x}_k$ coded with CDD is

$\mathbf{x}_k = \sqrt{\tfrac{P}{2}} \begin{bmatrix} s_k \\ s_k e^{jk\delta} \end{bmatrix}, \quad (6)$

where $\delta$ is a positive number called the CDD delay (e.g., $\delta = 2\pi/N_{\mathrm{FFT}}$). Then, the received signal at subcarrier $k$ is written as

$y_k = \sqrt{\tfrac{P}{2}} \left( h_{11} + h_{12} e^{jk\delta} \right) s_k + w_k. \quad (7)$

We also note that with CDD, a receiver needs to know only the composite channel $h_{11} + h_{12} e^{jk\delta}$ for demodulation, especially when the delay $\delta$ is small so that the composite channel does not vary abruptly over subcarriers. This is one benefit of CDD over SFBC, which requires knowledge of two channels. CDD can be easily extended to cases where the number of Tx antennas is greater than 2. For example, when the number of Tx antennas is 4, the transmit signal $\mathbf{x}_k$ coded with CDD at subcarrier $k$ is

$\mathbf{x}_k = \sqrt{\tfrac{P}{4}} \begin{bmatrix} s_k \\ s_k e^{jk\delta} \\ s_k e^{j2k\delta} \\ s_k e^{j3k\delta} \end{bmatrix}. \quad (8)$
A well-known drawback of CDD is frequency-selective nulling. As CDD artificially increases frequency selectivity, on some subcarriers the two components of the composite channel $h_{11} + h_{12} e^{jk\delta}$ add coherently, while on other subcarriers they add destructively. This problem becomes more severe when the two channels $h_{11}$ and $h_{12}$ are correlated, which occurs when the two transmit antennas are geometrically close. This particular issue has prevented CDD from being adopted as a robust TxD scheme in wireless communication implementations, despite its benefits.
Reviewing these two TxD schemes, SFBC and CDD, we see in summary that SFBC is robust but not extendable to systems with a large number of transmit antennas, while CDD is easily extendable and requires only one pilot signal but is not robust in correlated channels. In the sequel, we develop new TxD schemes that take the advantages of both while still ensuring robustness in correlated channels.
Phase-Shift Cyclic Delay Diversity (PS CDD).
We recall that a major problem of CDD is its non-robustness in correlated channels, when the two terms of $h_{11} + h_{12} e^{jk\delta}$ add destructively. When the number of Tx antennas is greater than or equal to 4, a simple variation of CDD may prevent frequency nulling from occurring. Instead of applying the same phase progression to the signals transmitted on the four transmit antennas as in (8), we apply an additional phase shift of $\varphi$ to the signal transmitted on the fourth antenna. With this phase-shift CDD (PS CDD), the transmit signal $\mathbf{x}_k$ coded at subcarrier $k$ is

$\mathbf{x}_k = \sqrt{\tfrac{P}{4}} \begin{bmatrix} s_k \\ s_k e^{jk\delta} \\ s_k e^{j2k\delta} \\ s_k e^{j(3k\delta+\varphi)} \end{bmatrix}. \quad (9)$

In this case, the composite channel from the receiver's point of view is $h_{11} + h_{12} e^{jk\delta} + h_{13} e^{j2k\delta} + h_{14} e^{j(3k\delta+\varphi)}$.
To facilitate the analysis of the performance of (9) in strongly correlated channels, we assume

$h_{11} = h_{12} = h_{13} = h_{14} = h. \quad (10)$

Accordingly, the composite channel becomes

$h \left( 1 + e^{jk\delta} + e^{j2k\delta} + e^{j(3k\delta+\varphi)} \right). \quad (11)$

To gain some insight into this approach, let $\delta = 2\pi/N_{\mathrm{FFT}}$, $\varphi = \pi$, and $N_{\mathrm{FFT}} = 1024$, and compare the composite channel powers of the CDD scheme (8) and PS CDD (9), normalized by $|h|^2$, as shown in Figure 1. In the figure, we can see that CDD suffers from frequency-selective nulls at $N_{\mathrm{FFT}}/4$, $N_{\mathrm{FFT}}/2$ and $3N_{\mathrm{FFT}}/4$, while PS CDD has no frequency nulls. Recalling that CDD intentionally introduces frequency selectivity for additional frequency diversity, we want PS CDD to retain good frequency selectivity while not suffering from frequency nulls. We may characterize this goal with the following objective function:

$\max_{\varphi} \; (P_{\max} - P_{\min}) P_{\min}, \quad (12)$

where $P_{\min}$ and $P_{\max}$ are the minimum and maximum composite-channel powers, respectively, of a scheme across all subcarriers. We find that $(P_{\max} - P_{\min}) P_{\min}$ for the CDD and the PS CDD schemes in Figure 1 is 0 and 1.6323, respectively; hence, under this objective function, PS CDD is better than CDD in terms of both introducing frequency selectivity and keeping the minimum power large. We note that the optimal $\varphi$ for (9) can be found with a numerical method; a small sketch is given below. The most general form of PS CDD can be written as

$\mathbf{x}_k = \sqrt{\tfrac{P}{4}} \begin{bmatrix} s_k \\ s_k e^{j(k\delta_1+\varphi_1)} \\ s_k e^{j(k\delta_2+\varphi_2)} \\ s_k e^{j(k\delta_3+\varphi_3)} \end{bmatrix}, \quad (13)$

where we may optimize the performance by choosing the parameters $\delta$ and $\varphi$. We also note that for demodulation of the PS CDD signal, we need only one pilot signal for the composite channel $h_{11} + h_{12} e^{j(k\delta_1+\varphi_1)} + h_{13} e^{j(k\delta_2+\varphi_2)} + h_{14} e^{j(k\delta_3+\varphi_3)}$.
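A minimal NumPy sketch of this comparison, assuming the fully correlated channel of (10) and the Figure 1 parameters ($\delta = 2\pi/N_{\mathrm{FFT}}$, $N_{\mathrm{FFT}} = 1024$); the brute-force grid search for $\varphi$ stands in for the unspecified numerical method.

```python
import numpy as np

N_FFT = 1024
delta = 2 * np.pi / N_FFT
k = np.arange(N_FFT)

def composite_power(phi):
    # |1 + e^{jk d} + e^{j2k d} + e^{j(3k d + phi)}|^2, normalized by |h|^2 (eq. (11))
    c = (1 + np.exp(1j * k * delta) + np.exp(2j * k * delta)
           + np.exp(1j * (3 * k * delta + phi)))
    return np.abs(c) ** 2

def objective(phi):
    p = composite_power(phi)
    return (p.max() - p.min()) * p.min()   # eq. (12)

print(objective(0.0))      # plain CDD: nulls give P_min = 0, so objective = 0
print(objective(np.pi))    # PS CDD with phi = pi (cf. Figure 1)

# brute-force search for the best phase shift
phis = np.linspace(0, 2 * np.pi, 3601)
best = phis[np.argmax([objective(ph) for ph in phis])]
print(best)
```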
Space-Frequency Block-Code with Phase-Shift Cyclic Delay Diversity (SFBC PS CDD).
In Section 4.1, we introduced PS CDD, which does not suffer from frequency-selective nulls, while keeping the coded diversity benefit of CDD and keeping the required number of pilot signals at one.
In this section, we combine Alamouti SFBC and PS CDD, so that a new TxD scheme can enjoy uncoded diversity while keeping some benefits of PS CDD.
For an SFBC PS CDD transmission, two transmit signals at two adjacent subcarriers are paired, which we denote $\mathbf{x}_k$ and $\mathbf{x}_{k+1}$. These two vectors are constructed in such a way that

$\mathbf{x}_k = \sqrt{\tfrac{P}{4}} \begin{bmatrix} s_k \\ e^{j(k\delta_1+\varphi_1)} s_{k+1} \\ e^{j(k\delta_2+\varphi_2)} s_k \\ e^{j(k\delta_3+\varphi_3)} s_{k+1} \end{bmatrix}, \qquad \mathbf{x}_{k+1} = \sqrt{\tfrac{P}{4}} \begin{bmatrix} -s_{k+1}^* \\ e^{j(k\delta_1+\varphi_1)} s_k^* \\ -e^{j(k\delta_2+\varphi_2)} s_{k+1}^* \\ e^{j(k\delta_3+\varphi_3)} s_k^* \end{bmatrix}. \quad (14)$

With this construction, we obtain an orthogonal system of equations at the receiver side:

$\begin{bmatrix} y_k \\ y_{k+1}^* \end{bmatrix} = \sqrt{\tfrac{P}{4}} \begin{bmatrix} \tilde{h}_{11} & \tilde{h}_{12} \\ \tilde{h}_{12}^* & -\tilde{h}_{11}^* \end{bmatrix} \begin{bmatrix} s_k \\ s_{k+1} \end{bmatrix} + \begin{bmatrix} w_k \\ w_{k+1}^* \end{bmatrix}, \quad (15)$

where

$\tilde{h}_{11} = h_{11} + e^{j(k\delta_2+\varphi_2)} h_{13}, \qquad \tilde{h}_{12} = e^{j(k\delta_1+\varphi_1)} h_{12} + e^{j(k\delta_3+\varphi_3)} h_{14}. \quad (16)$

From (14), (15), and (16), we see that the SFBC PS CDD construction reduces to a 2-Tx SFBC scheme, which ensures easy decodability relying on the orthogonal structure and allows us to have uncoded diversity. Furthermore, each of the effective channels $\tilde{h}_{11}$ and $\tilde{h}_{12}$ is constructed with CDD, which provides coded diversity through intentionally introduced frequency selectivity and facilitates demodulation of SFBC PS CDD using two pilot signals, for $\tilde{h}_{11}$ and $\tilde{h}_{12}$.
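To make the reduction to an effective 2-Tx Alamouti scheme concrete, here is a small NumPy sketch that builds the effective channels of (16) and applies standard Alamouti combining on a noiseless subcarrier pair; the delay and phase values are arbitrary illustrations, not the paper's optimized parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
P, N_FFT, k = 1.0, 1024, 37                       # arbitrary subcarrier index
delta = 2 * np.pi / N_FFT
d1, d2, d3 = delta, 2 * delta, 3 * delta          # example delay parameters
p1, p2, p3 = 0.0, 0.0, np.pi                      # example phase shifts

h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)  # h11..h14
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=2) / np.sqrt(2)  # QPSK pair

# Effective 2-Tx channels of eq. (16)
h_eff1 = h[0] + np.exp(1j * (k * d2 + p2)) * h[2]
h_eff2 = np.exp(1j * (k * d1 + p1)) * h[1] + np.exp(1j * (k * d3 + p3)) * h[3]

# Noiseless received pair under the SFBC PS CDD mapping of eqs. (14)/(15)
y_k  = np.sqrt(P / 4) * (h_eff1 * s[0] + h_eff2 * s[1])
y_k1 = np.sqrt(P / 4) * (-h_eff1 * np.conj(s[1]) + h_eff2 * np.conj(s[0]))

# Standard Alamouti combining on the effective channels recovers the symbols
g = np.abs(h_eff1)**2 + np.abs(h_eff2)**2
s0 = (np.conj(h_eff1) * y_k + h_eff2 * np.conj(y_k1)) / (np.sqrt(P / 4) * g)
s1 = (np.conj(h_eff2) * y_k - h_eff1 * np.conj(y_k1)) / (np.sqrt(P / 4) * g)
print(np.allclose([s0, s1], s))  # True: the pair decodes like 2-Tx SFBC
```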
Numerical Results
In this section, we present numerical simulation results on the block-error rate (BLER, or frame error rate, FER) performance, comparing the TxD schemes introduced in this paper with existing schemes such as CDD and SFBC-FSTD. For the simulations, the ITU typical urban 6-path channel model (TU-6) has been used, and a terminal speed of 120 km/h is assumed. Furthermore, we consider a MIMO channel with 4 transmit and 2 receive antennas, where the 4 transmit antennas have correlation coefficient 0.9 or 0, depending on the scenario, while the 2 receive antennas are uncorrelated. For channel coding, the 3GPP Turbo code [7] is used with code rate 1/3, and QPSK modulation is used. Channel-coded and modulated signals go through 6 distributed sets of 12 subcarriers in each time slot (or per block of time). For demodulation, perfect (ideal) channel estimation is assumed. At the receiver, maximal ratio combining is used, followed by a per-receive-antenna SFBC decoder. Figure 2 shows BLER curves obtained with the various TxD schemes under highly correlated channels with correlation coefficient 0.9. As discussed earlier, CDD performs worse than the others. As PS CDD removes the frequency nulls, its performance is better than that of CDD. Both SFBC-FSTD and SFBC PS CDD perform best among these four schemes. Considering the fact that SFBC PS CDD requires only two pilot signals, SFBC PS CDD can potentially achieve larger spectral efficiency than SFBC-FSTD. On the other hand, Figure 3 shows the BLER curves obtained under uncorrelated channels. In the uncorrelated case, SFBC-FSTD and SFBC PS CDD show similar performance and outperform PS CDD and CDD.
Conclusion
In this paper, we have introduced the phase-shift cyclic delay diversity (PS CDD) and SFBC PS CDD schemes. The proposed schemes address the frequency-selective nulling problem of traditional CDD. In particular, SFBC PS CDD combines the benefits of both SFBC and PS CDD and achieves robust block-error rate performance in both highly correlated and uncorrelated channels, while requiring only two pilot signals, as opposed to the four required by the well-known SFBC-FSTD.
Figure 1: Comparison of powers of composite channels with CDD and PS CDD.
As we can see from the receiver equation, CDD does not give uncoded diversity, as the received SNR is $|h_{11} + h_{12} e^{jk\delta}|^2 P/(2\sigma^2)$. However, CDD in combination with channel coding across the modulation symbols $\{s_k\}$ mapped to multiple subcarriers increases the coded diversity gain, as CDD increases the frequency selectivity of the composite channel $h_{11} + h_{12} e^{jk\delta}$. | 3,378.2 | 2010-03-10T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Characterization of FGFRL1, a Novel Fibroblast Growth Factor (FGF) Receptor Preferentially Expressed in Skeletal Tissues*
Clones for a novel transmembrane receptor termed FGFRL1 were isolated from a subtracted, cartilage-specific cDNA library prepared from chicken sterna. Homologous sequences were identified in other vertebrates, including man, mouse, rat and fish, but not in invertebrates such as Caenorhabditis elegans and Drosophila. FGFRL1 was expressed preferentially in skeletal tissues as demonstrated by Northern blotting and in situ hybridization. Small amounts of the FGFRL1 mRNA were also detected in other tissues such as skeletal muscle and heart. The novel protein contained three extracellular Ig-like domains that were related to the members of the fibroblast growth factor (FGF) receptor family. However, it lacked the intracellular protein tyrosine kinase domain required for signal transduction by transphosphorylation. When expressed in cultured cells as a fusion protein with green fluorescent protein, FGFRL1 was specifically localized to the plasma membrane where it might interact with FGF ligands. Recombinant FGFRL1 protein was produced in a baculovirus system with intact disulfide bonds. Similar to FGF receptors, the expressed protein interacted specifically with heparin and with FGF2. When overexpressed in MG-63 osteosarcoma cells, the novel receptor had a negative effect on cell proliferation. Taken together our data are consistent with the view that FGFRL1 acts as a decoy receptor for FGF ligands.
Most bones of the vertebrate skeleton are formed by a complex process termed endochondral ossification which involves a cartilage intermediate (1). This intermediate represents a highly specialized connective tissue. It consists of a single cell type, the chondrocytes, which are embedded in a rich extracellular matrix (2). Typically, this matrix makes up more than 90% of the cartilage volume and consists of collagens (types II, IX, X, and XI), proteoglycans (aggrecan, small leucine-rich proteins), and glycoproteins (matrilins, COMP).
During the first step of endochondral ossification, mesenchymal cells condense and differentiate into chondrocytes (3). These chondrocytes proliferate rapidly and lay down the cartilaginous model of the future bones. The chondrocytes undergo a complex series of distinct developmental stages, including proliferation, maturation, and hypertrophy. The hypertrophic cartilage is calcified and becomes vascularized. Finally, the calcified cartilage is invaded by osteoclasts and osteoblasts, which replace the cartilaginous tissue by bone.
Cartilage has become a popular tissue to study cell proliferation and differentiation in vitro (3). When cultivated on plastic dishes, chondrocytes rapidly dedifferentiate into fibroblast-like cells. In three-dimensional lattices, however, the chondrocytes undergo the ordered sequence of events observed during differentiation and maturation of cartilage in vivo. Three stages of chondrocyte differentiation have been defined in vitro: proliferative chondrocytes producing mainly collagen II, hypertrophic chondrocytes producing collagen X, and osteoblast-like cells producing collagen I and alkaline phosphatase.
Many vitamins, hormones, and growth factors are involved in the regulation of chondrocyte proliferation and differentiation (4). The mechanisms by which these substances act on gene expression, however, are not yet understood in detail. Vitamin D3, ascorbic acid, and retinoic acid as well as the growth factors FGF, TGF-β, bone morphogenetic protein (BMP), and insulin-like growth factor (IGF) play critical roles during endochondral ossification. Indian hedgehog (Ihh) and the parathyroid hormone-related peptide (PTHrP) constitute a paracrine feedback loop that regulates differentiation of proliferative chondrocytes into hypertrophic cells in the growth plate of long bones. At the level of gene expression, the transcription factors Sox9, L-Sox5, and Sox6 play an important role in the determination of the cartilage cell lineage (5). In particular, Sox9 appears to act as a key differentiation factor for chondrocytes, analogous to the way that MyoD acts as a master gene during muscle differentiation.
We have recently set out to identify and characterize novel cartilage proteins by a subtractive cDNA cloning approach (6,7). Special emphasis was put on regulatory proteins that might play a role during chondrocyte proliferation and differentiation. Two cDNA libraries were constructed with mRNA from cartilage and subtracted with mRNA from skin or skeletal muscle. These libraries comprised many clones for known cartilage-specific proteins. In addition, our libraries contained several novel clones whose sequences have not yet been stored in public data banks. Some of the novel clones coded for a structural protein that was highly related to the family of the matrilins (6,7). Another set of clones was found to code for a novel transmembrane protein related to members of the fibroblast growth factor receptor family. Here we describe the characterization of this novel membrane protein and demonstrate that it represents a new player in the FGF signaling system.
EXPERIMENTAL PROCEDURES
RNA Extraction and Northern Blotting-Total RNA was isolated from various sources by the guanidinium thiocyanate method (8) utilizing the RNeasy kit from Qiagen (Hilden, Germany). When the RNA was prepared from cultured cells, the cell lysate was passed through a Qiashredder as suggested by the supplier (Qiagen); when the RNA was prepared from embryonic or adult tissues, the samples were homogenized in lysis buffer and extracted once with an equal volume of phenol/chloroform. Following isolation on RNeasy columns, the purified RNA was resolved on 1% agarose gels in the presence of 1% formaldehyde and transferred to nylon membranes by vacuum blotting. Radioactively labeled cDNA probes were hybridized at 42°C to the membranes in a buffer containing 50% formamide as described (9,10). Following stringent washing, the membranes were exposed to x-ray film.
Subtracted cDNA Libraries-Poly(A) RNA was purified from total RNA by chromatography on oligo(dT)-Sepharose (Amersham Biosciences). Two subtracted cDNA libraries were constructed by the biotin/streptavidin/phenol method (10,11). In brief, the poly(A) RNA isolated from embryonic chicken sterna and from embryonic chicken control tissues (skin and skeletal muscle) was transcribed separately into double stranded cDNA and provided with specific adapters. An adapter with an internal SalI restriction site was used for the cDNA from sterna, whereas a biotinylated adapter without restriction site was used for the cDNA from the control tissues. The cDNAs were amplified by PCR with primers designed according to the adapter sequences. The cDNA preparation from sterna was then hybridized overnight at 68°C with a 10-fold excess of biotinylated cDNAs from the control tissues. All hybrids containing at least one biotinylated strand were removed by extraction with phenol in the presence of streptavidin. After a second round of hybridization and extraction, the material remaining in the aqueous phase was amplified by PCR to replenish the cDNAs from cartilage. The entire procedure was repeated a total of three times and then the products were ligated into the SalI restriction site of the cloning vector pUC13. The plasmids were transfected into competent bacteria and plated onto selective agar plates. The inserts from clones of interest were radioactively labeled and used as probes to screen a conventional cDNA library, which had been generated from the cartilage of embryonic chicken sterna (6,9). The DNA sequences of the inserts were determined by the dideoxy chain termination method. All sequences were analyzed with the software computer package of the genetics computer group at the University of Wisconsin.
In Situ Hybridization-In situ hybridization studies were performed essentially as described (9,12) utilizing labeled RNA probes. Samples from 17-day-old mouse embryos were embedded in paraffin and cut into serial sections. Riboprobes were transcribed from the mouse FGFRL1 cDNA sequence (nucleotides 661-1417), which had been cloned into the vector pSK+, with RNA polymerase T7 (antisense) or T3 (sense) in the presence of 35S-uridine 5′-triphosphate. The probes were purified by gel filtration and separated on a 5% polyacrylamide gel to confirm size and purity. The tissue sections were digested with proteinase K and hybridized with the probes at a concentration of ~5 × 10^7 dpm/ml for 18 h at 60°C. Following hybridization, the slides were treated with RNase A and washed in 0.1× standard saline citrate at 65°C for 2 h. The slides were then coated with NTB-2 emulsion, exposed for 3 days at 4°C, and developed with D-19 developer and fixer (Eastman Kodak Co.). Following staining with hematoxylin and eosin, the slides were inspected under a Nikon Eclipse E1000 microscope equipped with dark field optics.
Green Fluorescent Protein (GFP) Fusion Protein Expression-MG-63 and COS-1 cells were obtained from the American Type Culture Collection (Manassas, VA). The cells were grown under an atmosphere of 5% CO2 in Dulbecco's modified Eagle's medium supplemented with 9% fetal bovine serum. The cDNA sequence of mouse FGFRL1 corresponding to amino acid residues 1-468 was subcloned into the SacI/BamHI restriction site of the GFP expression vector pEGFP-N1 (Clontech). Likewise, the sequence for the intracellular domain (residues 493-end) was ligated into the HindIII/KpnI site of the expression vector pEGFP-C3. The reading frame of the resulting constructs was verified by DNA sequencing. The plasmids (1 µg/well) were mixed with Opti-MEM 1 (Invitrogen) containing 3 µl of FuGENE 6 reagent (Roche Diagnostics) and added to cultivated cells that were growing on circular cover slips placed into 6-well plates (13). One and 2 days after transfection, the slides were inspected under a Zeiss LSM 410 confocal microscope. Analogous experiments were also performed with the human FGFRL1 sequence.
Protein Expression in Insect Cells-The extracellular portion of the human and the chicken FGFRL1 protein was expressed in insect cells utilizing the BacVector transfection kit from Novagen. The cDNA sequences corresponding to amino acid residues 25-355 of the chicken or 36-368 of the human protein were ligated into the EcoRI/HindIII or BamHI/SacI restriction site, respectively, of the transfer plasmid pBAC-3. This transfer plasmid harbors the polyhedrin promoter and the sequence for the signal peptide of the baculovirus envelope protein gp64 to direct expression of proteins into the secretory pathway of infected insect cells. Furthermore, the fusion protein is expressed together with a His-tag that allows the purification of the recombinant protein on nickel affinity columns. Authenticity and reading frame of the resulting constructs were verified by DNA sequencing. The transfer plasmids (500 ng) were transfected together with the BacVector-3000 Triple Cut virus DNA (100 ng) into Sf9 cells that had been cultivated in Grace medium (Invitrogen) supplemented with 9% fetal bovine serum. Three days after transfection, viral particles of the supernatant were purified to homogeneity by the agarose overlay technique and amplified to a viral titer >10^7/ml. Recombinant protein was produced by infection of fresh Sf9 cells growing in 75-cm² flasks with this viral stock utilizing a multiplicity of 1-4 plaque-forming units/cell. Two days after infection, the supernatant of the insect cells was harvested, and the expressed proteins were purified by chromatography on nickel agarose (HIS-select HC nickel affinity gel, Sigma) and/or heparin-Sepharose (Amersham Biosciences) as suggested by the supplier.
Interactions of the Recombinant Protein-Recombinant protein (50 µg) was loaded onto a small heparin-Sepharose column (bed volume: 0.5 ml, Amersham Biosciences) that had been equilibrated with 150 mM NaCl, 0.2% Triton X-100, 50 mM sodium phosphate, pH 8.0. The column was extensively washed with the same buffer, and bound protein was eluted with a linear gradient of 0.15-1.35 M NaCl in a total volume of 22 ml. Fractions of 0.5 ml were collected and checked for their NaCl concentration by measuring the conductivity with a WTW LF330 conductivity meter (Weilheim, Germany). Aliquots (32 µl) from every third fraction were separated on a 10% SDS-polyacrylamide gel, transferred to a nylon membrane, and detected by immunoblotting with an antibody directed against the His-tag of the expressed protein.
To test for a potential interaction with FGF, human recombinant basic fibroblast growth factor was radiolabeled to a specific activity of 900 Ci/mmol (100 µCi/ml, Amersham Biosciences) using 125I and chloramine T. Aliquots of the expressed protein (5 µg) were incubated for 2 h at room temperature with 2.5 µl of labeled FGF2 and 25 µl of nickel-agarose (Sigma) in 1 ml of 300 mM NaCl, 0.2% Triton X-100, 2 mg/ml bovine serum albumin, 50 mM sodium phosphate, pH 8.0. In some experiments, the binding reaction was competed with 5 µg of recombinant basic fibroblast growth factor (Roche Applied Science) or with 10 µl of fetal bovine serum. The beads were collected by centrifugation and washed three times with 1 ml of 300 mM NaCl, 0.2% Triton X-100, 50 mM sodium phosphate, pH 8.0. Bound proteins were eluted with 40 µl of hot SDS sample buffer containing 2% β-mercaptoethanol and analyzed on a 15% SDS-polyacrylamide gel, followed by autoradiography.
Cell Proliferation Assay-A cell proliferation assay kit from Roche Applied Science was used to analyze the effect of FGFRL1 on cell growth. MG-63 cells were grown on cover slips to 30% confluence. The cDNA sequences for chicken and mouse FGFRL1 were ligated into the eukaryotic expression vectors pcDNA3.1(+) and pcDNA3.1(−) and transfected into the cells as described above. Following transfection, the cells were synchronized by starvation in medium lacking any fetal bovine serum. After 24 h the cells were stimulated to proliferate again by the addition of insulin (0.5 µg/ml), FGF2 (5 ng/ml), and heparin (300 ng/ml) in Dulbecco's modified Eagle's medium. After 18 h, the cells were labeled with bromodeoxyuridine and fixed with cold glycine-ethanol buffer as described by the supplier of the kit. Proliferating cells were visualized by indirect immunofluorescence utilizing a monoclonal anti-bromodeoxyuridine antibody and a secondary, fluorescein-labeled anti-mouse Ig antibody. Cell growth was determined by counting immunoreactive cells in relation to the total cell number.
RESULTS
Cloning of Chicken FGFRL1-Our cDNA libraries that had been constructed with mRNA from chicken cartilage and subtracted with mRNA from chicken skin and muscle comprised more than 500 clones. About 300 individual clones were analyzed with respect to their insert size (200-600 bp), their DNA sequence, and their hybridization pattern. One-third of these clones showed the expected behavior on a Northern blot: they were expressed in sternal cartilage but not in skeletal muscle (Fig. 1). A high redundancy was observed as the majority of the latter clones coded for type II collagen, the predominant collagen of cartilage. The remaining cartilage-specific clones coded for type IX collagen, type XI collagen, aggrecan, link protein, chondromodulin, and matrilin-3 (Fig. 1). Finally, seven clones were found to code for short fragments of a novel protein.
The novel clones were utilized as probes to identify overlapping clones in a conventional cDNA library prepared from chicken cartilage. Our efforts led to the isolation of 14 cDNA clones that altogether spanned a cDNA of 6175 nucleotides (GenBank™ accession number AJ535114). This cDNA contained an open reading frame of 1461 nucleotides that could be translated into a novel protein of 487 amino acids with a molecular mass of 54,000 Da. Utilizing the information of the chicken sequence and various expressed sequence tags, we were able to clone the homologous proteins from man (GenBank™ accession number AJ277437), mouse (GenBank™ accession number AJ293947), and rat (GenBank™ accession number AJ536020). The sequences for the human and the mouse protein have already been published as short sequence papers (14,15). A strong conservation of the amino acids was observed when the four protein sequences were compared (Fig. 2). The chicken amino acid sequence shared 74% sequence identity (81% sequence similarity if conservative amino acid replacements were included) with the human and 72% identity (78% similarity) with the rat sequence. The mouse sequence differed from the rat sequence only at 11 positions (not shown).
Computer predictions revealed that the novel protein represented a typical membrane protein that was highly related to the members of the FGF receptor family. We therefore termed the novel protein fibroblast growth factor receptor-like protein (approved gene symbol FGFRL1). Similar to the four members of the FGF receptor family (16), the novel chicken protein contained a signal peptide and a single membrane-spanning domain as well as three extracellular Ig-like repeats. Each of these repeats possessed two conserved cysteine residues that may be involved in the formation of a disulfide bridge. The three Ig-loops of the chicken FGFRL1 protein shared 39-48% sequence similarity with the extracellular domain of chicken FGFR3 (CEK2).
At the intracellular side, the novel protein differed completely from all members of the FGF receptor family. FGFR1-4 are known to contain a cytoplasmic protein tyrosine kinase domain that plays an important role in signal transduction (16,17). This kinase domain was completely missing in the novel protein. Instead, FGFRL1 contained a short cytoplasmic tail of about 100 amino acids. This cytoplasmic tail showed a much lower degree of conservation among the four species examined than the extracellular domain, with the exception of a histidine-rich stretch at the C terminus (Fig. 2). This stretch revealed some similarity to the histidine-rich region of homeodomain proteins.
FGFRL1 during Evolution-A peculiar observation was made with the murine FGFRL1 sequences. Compared with the chicken and the human sequences, the rat sequence diverged in the intracellular, histidine-rich region at residue 475 and stopped after 54 unrelated residues (Fig. 2). If a single nucleotide were deleted at this position, the reading frame would change to a highly similar sequence that would end after 21 residues with the motif Y-Q-C, as found in the chicken and the human protein. Nevertheless, the frameshift is real, since it was confirmed in the mouse sequence. It is therefore likely that the murine FGFRL1 genes have sustained a frameshift mutation relatively late during evolution.
The complete genomic sequences of several organisms have recently been elucidated and deposited in public data banks. We therefore checked whether other species may also contain a receptor similar to FGFRL1. Neither the fruit fly Drosophila melanogaster nor the roundworm Caenorhabditis elegans possessed any gene that would give rise to a transmembrane protein with three related Ig-like repeats. A similar sequence, however, was identified in the genome of the pufferfish Fugu rubripes. This fish contained a gene with six exons that could be transcribed into an mRNA of ~3000 nucleotides and translated into a protein of ~500 residues (Fig. 2). The putative fish protein shared 67-73% sequence identity with FGFRL1 from chicken, rat and man. It should be noted that the fish sequence also ended with the peculiar histidine-rich region and the motif Y-Q-C, like the human and the chicken sequences. Thus, vertebrates from fish to man contain a novel, homologous protein that belongs to the FGFR family. Lower animals, including insects and nematodes, do not appear to possess this protein.
Expression of FGFRL1-The expression of FGFRL1 was analyzed on Northern blots containing RNA from various chicken tissues (Fig. 3). Two bands of similar intensities corresponding to mRNA species of 7 and 4 kb were detected. The size of the larger band is consistent with the total length of our cDNA sequence. The shorter band might represent an mRNA species that was generated by utilization of an alternative polyadenylation site at position 3416 of our cDNA sequence. This notion is consistent with the fact that a radioactively labeled probe derived from the very 3′ end of the total cDNA sequence hybridized with the 7-kb mRNA species but not with the 4-kb species (not shown).
The two mRNA species were detected in RNA preparations from the cartilaginous sterna of 16-day-old chicken embryos (Fig. 3, left). Very faint bands that became clearly visible after prolonged exposure were also noticed with RNA preparations from embryonic femur, skeletal muscle, and heart. In contrast, RNA preparations from skin, gizzard, liver, and brain did not reveal any signal. Fairly strong bands were also detected with RNA preparations from adult chicken sterna (Fig. 3, right). In this case, the signal obtained with the cranial portion of the sternum, which is known to contain many hypertrophic chondrocytes in a mineralized matrix, barely differed from that of the caudal portion, which contains proliferative and resting chondrocytes in a non-mineralized matrix. All the other tissues investigated from the adult animal (brain, gizzard, skeletal muscle, calvaria) revealed faint bands that became clearly visible after prolonged exposure. Thus, the FGFRL1 gene is expressed at fairly high level in cartilage and at very low level in many other tissues.
Tissue Distribution-The tissue distribution of the FGFRL1 mRNA was further investigated by in situ hybridization on whole body sections of 17-day-old mouse embryos (Fig. 4). Our antisense probe hybridized specifically with an mRNA in all cartilaginous structures. A relatively strong signal was observed in the nasal cartilage, the ribs, and the sternum as well as in the cartilaginous rudiments of developing bones such as the vertebrae and the pelvic bone. Strong expression was also observed in some muscular tissues, including the muscles of the tongue and the diaphragm. In contrast, no signal was detected in the eye, the brain, and the spinal cord. Moreover, the lung and most of the inner organs, including liver, stomach, intestine, and colon, showed very low signal. Hybridization of a consecutive tissue section with our sense probe showed very low background signal, demonstrating the specificity of our probe (Fig. 4).
The cartilaginous vertebrae of a 17-day-old embryo were investigated in greater detail (Fig. 5). Relatively strong expression of FGFRL1 mRNA was observed in the developing vertebral bodies. No differences were noted between the cranial and the caudal portions of the vertebrae. Regions of mineralizing cartilage containing hypertrophic cells showed substantially reduced signal. Likewise, the nucleus pulposus, which would later give rise to the intervertebral disc, revealed only a weak signal. All the tissues adjacent to the vertebrae, including the spinal cord at the dorsal part and the inner organs at the ventral part, reacted only weakly with our probe. Nevertheless, the signal at these locations appeared to be slightly stronger than the background observed with the sense probe (Fig. 5). These results are consistent with the view that FGFRL1 is expressed at fairly high level in all cartilaginous tissues of the skeleton as well as in a few specialized muscles, and at very low level in several other tissues.

FIG. 2. Alignment of the amino acid sequences from human (GenBank™ accession number AJ277437), rat (GenBank™ accession number AJ536020), and chicken (GenBank™ accession number AJ535114) FGFRL1. Identical residues are boxed. The sequence of FGFRL1 from the pufferfish Fugu rubripes as derived from the draft of its genomic sequence is included. Note that the signal peptide of the fish sequence is missing because the promoter of this gene has not yet been sequenced. The putative cleavage site of the signal peptide of the other sequences is indicated by an arrow. The three Ig-like repeats of the extracellular domain are indicated by brackets, as is the transmembrane domain toward the C terminus. The four conserved glycosylation signals N-X-T are underlined. The mouse sequence (GenBank™ accession number AJ293947) differs from the rat sequence only at 11 positions and is not included.
Subcellular Localization-As a receptor for growth factors, the FGFRL1 protein should be located at the plasma membrane. To study the subcellular distribution, we fused the cDNA sequence for mouse and human FGFRL1 to the sequence for GFP and transfected the resulting constructs into human (MG-63) and monkey (COS-1) cells. When inspected by confocal microscopy, the majority of the signal emitted from GFP was found to be distributed along the plasma membrane (Fig. 6). Some signal could also be detected at compartments of the secretory pathway (Golgi, secretory vesicles), but virtually no signal was detected in the nucleus or the cytoplasm. Thus, the novel receptor is faithfully expressed from our constructs and inserted into the plasma membrane where it could theoretically interact with ligands.
A similar experiment was performed with the cytoplasmic portion of the FGFRL1 protein fused to GFP. After transfection of the corresponding construct into MG-63 or COS-1 cells, the fusion protein was found to be distributed all over the cytoplasm and the nucleus in a very diffuse fashion (not shown). The distribution could not be distinguished from that obtained with cells that had been transfected with a construct for GFP alone. Thus, the cytoplasmic tail of the FGFRL1 protein does not appear to interact with proteins of any specific subcellular site.
Interactions of the FGFRL1 Protein-To investigate a possible interaction of FGFRL1 with putative ligands, chemical amounts of the FGFRL1 protein were required. We therefore placed the chicken and the human cDNA sequence into a prokaryotic expression vector and expressed the extracellular domain of FGFRL1 as a fusion protein in Escherichia coli. Although we were able to isolate high amounts of fusion proteins from inclusion bodies, the purified proteins did not fold in a correct way as demonstrated by SDS-polyacrylamide gel electrophoresis. In the absence of reducing agents, the proteins formed large aggregates linked by disulfide bonds although there should be no free sulfhydryl group available after formation of the correct disulfide bridges in each of the three Ig loops (not shown).
FIG. 4. Expression of FGFRL1 in mouse embryos as revealed by in situ hybridization.
Consecutive sections from the whole body of a 17-day-old mouse embryo were hybridized with radiolabeled RNA probes for mouse FGFRL1 as indicated. The signal was detected by dark field illumination. The total length of the whole body section is 15 mm.
FIG. 5. Expression of FGFRL1 in the mouse vertebrae.
Consecutive sections from the vertebrae of a 17-day-old mouse embryo were hybridized with radiolabeled RNA probes for mouse FGFRL1 as indicated (Antisense, Sense) or stained with hematoxylin and eosin (HE). The signal was detected by dark field illumination. Bar, 100 µm.
FIG. 3. Northern blot analysis of FGFRL1 expression in various chicken tissues.
Total RNA from eight embryonic tissues (16-day-old embryos, left) and six adult tissues (40-day-old animals, right) was resolved on agarose gels, transferred to nylon membranes, and hybridized with a labeled probe for chicken FGFRL1. An RNA preparation from embryonic sternum has also been included in the preparations from adult tissues as a reference. The signal of the 18 S RNA is shown at the bottom as a loading control. The migration positions of the 28 S and 18 S ribosomal RNAs are indicated in the margin.
FIG. 6. Subcellular distribution of FGFRL1 in cultivated cells.
The cDNA for FGFRL1 was ligated to the sequence for GFP and transfected into MG-63 and COS-1 cells. Two days after transfection, the expression of the fusion protein was inspected by confocal microscopy. The top panels show fluorescence emitted from GFP, and the lower panels show the same area by differential interference contrast (DIC).
We therefore utilized a baculovirus system and produced the extracellular domain of FGFRL1 in insect cells. The signal peptide of the envelope protein gp64 as well as a stretch of 6 histidine residues was included in the expression construct to facilitate purification of the secreted protein by affinity chromatography on nickel columns. With this system, we were able to isolate small amounts of the pure protein, which migrated on an SDS-polyacrylamide gel with an apparent molecular mass of 53 kDa (Fig. 7). This value is in good agreement with the calculated molecular mass of the fusion protein (54 kDa after glycosylation). The mobility of the expressed protein barely changed prior to and after reduction of disulfide bonds (Fig. 7), indicating that the cysteine residues did not participate in the formation of any unwanted intermolecular disulfide bond. It is therefore likely that the protein expressed in insect cells had adopted a more native conformation than the misfolded protein expressed in bacteria.
FGF receptors are known to interact with heparin or heparan sulfate chains. This interaction is believed to be crucial for the dimerization and subsequent signaling of the receptors (16-18). We therefore investigated whether the chicken FGFRL1 protein might also interact with heparin. For this purpose, the recombinant protein was loaded onto a heparin-Sepharose column and eluted with a linear gradient of salt. As seen in Fig. 8, the protein bound specifically to the column and eluted as a relatively broad peak at an ionic strength corresponding to 600 mM NaCl. Analogous results were obtained with the human protein. Thus, there is a specific interaction of FGFRL1 with heparin.
Next, we studied the putative interaction of FGFRL1 with FGF ligands. FGF2, which occurs in virtually all tissues and which is known to interact with all conventional FGF receptors (16,17), was iodinated and analyzed on a polyacrylamide gel. Consistent with its amino acid sequence, the radioactively labeled growth factor migrated with an apparent molecular mass of 18,000 Da (Fig. 9). Our preparation contained a relatively high amount of low-molecular-mass material that had probably been created by unspecific degradation during labeling and heating of the sample. The radiolabeled growth factor was incubated with recombinant FGFRL1 in the presence of 0.2% Triton and 2 mg/ml serum albumin to block unspecific sites. Bound complexes were precipitated with nickel-agarose and analyzed, after extensive washing, by polyacrylamide gel electrophoresis. Labeled FGF2 was found to interact with the expressed FGFRL1 protein as demonstrated in Fig. 9. This interaction was specific, since it could be competed by the addition of an excess of unlabeled FGF2. The formation of the complex could also be competed by the addition of fetal bovine serum, but in this case a relatively large amount of serum was required to observe any effect. Thus, the recombinant FGFRL1 protein can specifically interact with FGF2.
Effect on Cell Proliferation-FGF receptors are known to control the proliferation and differentiation of various cell types. We therefore investigated whether our novel receptor might have any influence on the proliferation of mesenchymal cells. MG-63 cells were chosen for this experiment, because these osteosarcoma cells had shown the correct expression and segregation of the new receptor to the plasma membrane (see Fig. 6). The sequences of chicken and mouse FGFRL1 were cloned in sense or antisense orientation into a eukaryotic expression vector and transfected into MG-63 cells. After synchronization by starvation, the cells were stimulated to proliferate by the addition of insulin, FGF2, and heparin. Finally, the stimulated cells were labeled with bromodeoxyuridine and analyzed by immunofluorescence using an antibody against bromodeoxyuridine. Under these conditions, about half of the cells had started to proliferate, as detected by the incorporation of bromodeoxyuridine into newly synthesized DNA (Fig. 10). When the cells were transfected, prior to stimulation, with our FGFRL1 sense construct, there was a pronounced reduction in the ratio of cells undergoing DNA duplication. No reduction was observed with cells that had been transfected with our antisense constructs or with the empty expression vector (Fig. 10). These results suggest that FGFRL1 has a negative effect on DNA synthesis and proliferation of living cells.

DISCUSSION

Utilizing a subtractive cDNA cloning approach, we have identified and cloned a novel cell surface receptor from chicken, mouse, rat, and man, which we have termed FGFRL1. Several lines of evidence suggest that this receptor represents a new member of the FGF receptor family: (i) the extracellular domain of FGFRL1 exhibits up to 48% sequence similarity with members of the FGFR family. In contrast, the sequence similarity with other surface receptors is minimal. (ii) The domain structure of FGFRL1 is highly related to the four members of the FGFR family, which also contain three extracellular Ig-like repeats and a hydrophilic box separating the first and the second Ig-like repeat (16,17). (iii) When expressed by insect cells, FGFRL1 is able to interact with heparin. A similar interaction has been observed with all FGF receptors (16,17). As a matter of fact, this interaction has been used to isolate the receptors by affinity chromatography (19). Binding to heparin or heparan sulfate is believed to be crucial for the function of the FGFRs, since it induces the dimerization of the receptors (16-18). (iv) FGFRL1 interacts with FGF2. This interaction is specific, as it can be competed with an excess of the unlabeled ligand. Based on the specific radioactivity of our FGF2 preparation and on the amount of bound ligand, we estimated the dissociation constant K_D of the FGF2·FGFRL1 complex to be ~0.6 × 10^-8 M. Compared with the K_D of the FGF2·FGFR2 complex (10^-10 M), this value seems rather low. The actual affinity, however, might be higher under physiological conditions in the absence of any detergent. In our experiments we had to include 0.2% Triton X-100 to prevent the loss of the recombinant protein by unspecific adsorption to plastic and glass surfaces. The affinity might also be higher in the natural, trimeric complex consisting of receptor, ligand, and heparin (18), since heparin will interact with both FGF2 and FGFRL1.
Nevertheless, we cannot rule out the possibility that FGF2 as used in our study does not represent the natural ligand for FGFRL1 and that the physiological ligand would have a higher affinity. At any rate, plasmon resonance experiments will be required to conclusively determine the K_D value.
Two other research groups have recently described the cloning and sequencing of the same receptor-like molecule, which they have termed FGFR5 (20,21). Kim et al. (20) used the polymerase chain reaction in combination with degenerate primers to isolate clones for the novel FGFR5 from man. These authors provided evidence that FGFR5 is specifically expressed in the human pancreas. Sleeman et al. (21) identified clones for murine FGFR5 from an expressed sequence tag data bank. These authors demonstrated expression of FGFR5 in many different tissues from mouse and man, including kidney, brain, muscle, lung, and liver. It is difficult to compare these results with our expression data, because neither of the two groups included any samples from cartilage in their studies. We have tried to reproduce these findings with commercial as well as homemade Northern blots. In relation to the high expression of FGFRL1 in cartilage, we observed only weak expression in pancreas, kidney, and lung (14). It is possible that differences in the sensitivity of the hybridization protocols applied may explain part of the apparent discrepancies. Sleeman et al. (21) used probes of high specific radioactivity prepared by the polymerase chain reaction, whereas we used conventional probes labeled by random oligonucleotide priming. It should also be noted that the strong signal detected by Kim et al. (20) in samples from human pancreas corresponded to an mRNA species of ~5 kb that was not found in any other tissue. This species must represent an unusual gene product that clearly differs from the human FGFRL1 mRNA of ~3 kb characterized in our study (14). Furthermore, pancreas-specific expression is not supported by data from serial analysis of gene expression (SAGE) published on the internet (www.ncbi.nlm.nih.gov/SAGE). At any rate, our in situ hybridization studies with sections from mouse embryos clearly corroborate the preferential expression of FGFRL1 in skeletal tissues.
We believe that FGFRL1 does not possess any direct signaling function although we cannot rule out the possibility that it might signal in an as yet unknown way. Several lines of evidence may support our notion: (i) FGFRL1 does not contain any protein tyrosine kinase domain at its C terminus that would be required for signaling by transphosphorylation. (ii) The cytoplasmic domain of FGFRL1 does not appear to interact with any signaling partner. We have employed the yeast two-hybrid system to search for potential interaction partners encoded by cDNA libraries prepared from placenta and chondrocytes. These efforts did not lead to the identification of any meaningful clones. (iii) The intracellular tail of FGFRL1 does not appear to interact with any target at a specific subcellular site when expressed in vertebrate cells as a fusion protein with GFP. (iv) The murine FGFRL1 sequence exhibits a peculiar frameshift mutation in comparison with the human, chicken, and fish protein, which results in the incorporation of 54 unrelated amino acids at the C terminus of murine FGFRL1. Since this portion has not been conserved during evolution, it cannot serve a general, crucial function in all vertebrates.
We believe that FGFRL1 might rather have a modulating or inhibitory function in the FGF signaling cascade. It binds FGF ligands and sequesters them from other receptors. Furthermore, it might be able to dimerize, via its heparin binding site, with a true FGF receptor and inhibit the activity at the cytoplasmic domain. Here, the peculiar histidine-rich motif of FGFRL1 might play an active role. This motif could either form a complex with tyrosine residues of the true FGF receptors and prevent their phosphorylation or alternatively it could bind the phosphorylated residues after modification and prevent their interaction with downstream effector molecules. In either case, the FGF signal would be attenuated. Our preliminary studies may support this conclusion. When overexpressed in MG-63 cells, the novel receptor significantly reduced cell proliferation. This effect was observed with the sense construct but not with the antisense construct. A similar inhibition of FGF signaling was previously found with an artificially truncated form of human and chicken FGFR1. Ueno et al. (22) prepared an FGFR1 construct that contained the extracellular and the transmembrane domain but lacked the intracellular tyrosine kinase domain. When overexpressed in Xenopus oocytes, this construct inhibited signaling by the wild-type receptor in a dominant negative fashion.
Several physiological variants of FGF receptors that are incapable of signaling because they are lacking the protein tyrosine kinase domain have also been described in the literature (16,17). These forms are generated by proteolytic cleavage and/or by alternative splicing. Since they can still interact with ligands, they will inhibit FGF signaling. In contrast to these processed forms, FGFRL1 is the first receptor of the FGF signaling system with a transmembrane domain but without a kinase domain, which is transcribed from a separate gene.
The concept of a nonfunctional receptor that acts as a molecular trap for ligands is not unprecedented in biology but has been found in several growth factor families. The first example of such a decoy receptor, which binds and sequesters the agonist but is incapable of signaling, was described in the interleukin-1 signaling system (23). IL-1 receptor I possesses a cytoplasmic TIR domain, which plays a key role in the recruitment of an adapter protein. In contrast, IL-1 receptor II contains a short cytoplasmic domain lacking any TIR motif. All the experimental results accumulated so far are consistent with the view that IL-1 receptor II is a pure IL-1 decoy that blocks the IL-1 response. Similar decoy receptors have subsequently been identified in the tumor necrosis factor receptor family (e.g. osteoprotegerin, Fas death receptor) and in the chemokine signaling system (e.g. Duffy, D6). Furthermore, a pseudoreceptor has also been described in the TGF-β signaling system (24). A transmembrane protein designated BAMBI has been identified that is related to TGF-β receptor I but lacks the intracellular serine/threonine kinase domain. This pseudoreceptor can associate with a normal TGF-β receptor and inhibit TGF-β signaling. Likewise, a receptor without signaling function has been found in the EGF receptor system and termed ErbB3 (25). Similar to the active receptor ErbB4, ErbB3 does interact with neuregulins, but it cannot pass on the signal because it lacks the kinase activity. All these examples emphasize that it is a common strategy of nature to counteract the effects of growth factors and cytokines by the use of decoy receptors.
It is of interest to note in this context that decoy receptors appear to possess substantially lower affinities for the corresponding ligands than the cognate receptors. IL-1 receptor I has an affinity of 10⁻¹⁰ M for its ligands IL-1α and IL-1ra, whereas the negatively acting IL-1 receptor II binds these ligands at least 100 times less efficiently (23). Our FGFRL1 shows an affinity for FGF2 that is about two orders of magnitude lower than the affinity of FGFR1. A relatively weak interaction of FGFRL1/FGFR5 with FGF2 was also noted by Sleeman et al. (21). In fact, these authors reported that the interaction was so weak that FGFR5 was not able to compete with FGFR1 for ligand binding. Further experiments will be required to determine the significance of the relatively low affinity of decoy receptors in comparison with their cognate receptors. The FGF signaling system is used to regulate a variety of functions, including cell proliferation, differentiation, and migration (16, 17). FGFRL1 might therefore be involved in the fine-tuning of several different processes. There is indirect evidence that its function might be linked with the formation of the vertebrate skeleton, because FGFRL1 appears to be expressed in all vertebrates (man, mouse, rat, chicken, fish) but not in invertebrates (Drosophila, Caenorhabditis). Furthermore, FGFRL1 is expressed at a relatively high level in the cartilage of skeletal tissues, but at a much lower level in many other tissues. It might therefore be involved in the control of proliferation and differentiation of chondrocytes. We have investigated this possibility, but so far we could not find any evidence to support this notion. The chicken sternum has widely been used as a model system for chondrocyte differentiation (1, 26). The cranial portion of the sternum, which is mineralized and contains chondrocytes of a late differentiation state, including many hypertrophic cells, showed an expression level of FGFRL1 similar to the caudal portion, which contains proliferative and resting chondrocytes of an early differentiation state. It is therefore unlikely that FGFRL1 specifically controls the differentiation program of chondrocytes.
Recent work with an FGFR-like molecule of planarians may provide some clues to the function of FGFRL1 in vertebrates (27). Planarians possess two FGF receptors (FGFR1 and FGFR2) as well as an FGFR-like molecule termed nou-darake.
Although the amino acid sequences of human FGFRL1 and the planarian protein nou-darake are not related, the two proteins exhibit a striking similarity in their domain structures. Nou-darake has two extracellular Ig-loops, a single membrane-spanning domain and a C-terminal domain that lacks the kinase domain characteristic of FGFRs. Nou-darake is specifically expressed in the head region of the animals. Studies with RNA interference showed that the loss of function of nou-darake results in the ectopic induction of brain tissue throughout the body of the animal. The authors concluded that nou-darake may restrict growth of brain tissue to the head region by inhibiting diffusion of FGF to the rest of the body. Although our receptor-like molecule does not appear to be expressed in the brain, the planarian system can still provide some clues to a possible function of FGFRL1 in vertebrates. It is conceivable that FGFRL1 may restrict the diffusion range of FGF in an analogous way in regions where FGF expression and signaling are high. This would be the case in the vertebrate skeleton during development. Thus, FGFRL1 might trap FGF ligands and keep them within the border of this tissue to prevent the uncontrolled proliferation of adjacent cells in neighboring tissues. The tools are now available to investigate this possibility.
"Biology"
] |
A novel socially assistive robotic platform for cognitive-motor exercises for individuals with Parkinson's Disease: a participatory-design study from conception to feasibility testing with end users
The potential of socially assistive robots (SAR) to assist in rehabilitation has been demonstrated in contexts such as stroke and cardiac rehabilitation. Our objective was to design and test a platform that addresses specific cognitive-motor training needs of individuals with Parkinson’s disease (IwPD). We used the participatory design approach, and collected input from a total of 62 stakeholders (IwPD, their family members and clinicians) in interviews, brainstorming sessions and in-lab feasibility testing of the resulting prototypes. The platform we developed includes two custom-made mobile desktop robots, which engage users in concurrent cognitive and motor tasks. IwPD (n = 16) reported high levels of enjoyment when using the platform (median = 5/5) and willingness to use the platform in the long term (median = 4.5/5). We report the specifics of the hardware and software design as well as the detailed input from the stakeholders.
Introduction
Parkinson's disease (PD) is a progressive neurodegenerative disorder caused by the degeneration of neurons in the substantia nigra, leading to a dopamine shortage (Lee et al., 2021). This results in a multisystem disorder affecting both motor and non-motor functions, including gait disturbances, dyskinesias, rigidity, sleep impairment, and cognitive decline (Sveinbjornsdottir, 2016). The resulting impairments often develop to be highly debilitating, preventing participation in many activities of daily living (Lingo VanGilder et al., 2021). PD is the second most prevalent neurodegenerative condition after Alzheimer's disease, and its incidence is expected to increase to 8-9 million individuals in Western Europe by 2030 due to the aging of the world population (Dorsey et al., 2007; Driver et al., 2009). The median onset age of PD is 60-69 years old (Pagano et al., 2016), and approximately 10% of people with PD have young-onset PD (onset of symptoms between 21 and 40 years of age) (Biddiscombe et al., 2020; Post et al., 2020).
Treatment methods for PD include the administration of drugs, such as levodopa (Pereira et al., 2019), and invasive surgery, such as the implantation of deep brain stimulators (DBS) (Sveinbjornsdottir, 2016). However, the effect of both treatments decreases over time (Huot et al., 2013; Rizzone et al., 2014; Sveinbjornsdottir, 2016), and in recent decades several complementary approaches have been studied, including physical exercise, music therapy, the "Train Big" approach, and training of cognitive abilities such as the executive functions (Kalbe et al., 2020), as detailed below. Physical exercise in its various forms (including dancing and Tai Chi) has been recognized as complementary to traditional treatments in alleviating some of the symptoms of PD (Kurlan et al., 2015; Westheimer et al., 2015; Mak and Wong-Yu, 2019). Music has been suggested as a useful tool to accompany rehabilitation exercises and has been shown to improve balance and gait stability, as well as to minimize anxiety and enhance the wellbeing of individuals with PD (IwPD) (Pacchetti et al., 2000). Music can evoke both motor and emotional responses by simultaneously engaging multiple sensory pathways (Pacchetti et al., 2000). Music-based therapy for people with Parkinson's disease is effective because it combines cognitive movement strategies, cueing techniques, balance exercises and physical activity while focusing on pleasure (de Dreu et al., 2012). Music therapy (MT) has been shown to be effective across various aspects of affective, motor, and behavioral capabilities (Pacchetti et al., 2000). Cognitive training has been successful in improving the cognitive abilities of IwPD through, for example, computerized games (Petrelli et al., 2014) and online training (Petrelli et al., 2014; Fellman et al., 2020; Van De Weijer et al., 2020).
There is evidence that combining some of these approaches is also helpful. For example, combining movement (specifically, dance) training with music has been shown to improve cognitive skills and to delay the decline of executive functions and memory in IwPD (Pereira et al., 2019). Also, cognitive training before or concurrently with motor training has been suggested to optimize treatment outcomes for IwPD (Lingo VanGilder et al., 2021).
As the success of motor training relies on the adherence of IwPD to long-term exercise programs (Schootemeijer et al., 2020), effective interventions should motivate long-term adherence. One approach could be to use gamified technologies, as has been implemented, for example, in cognitive training for healthy individuals (Cohavi and Levy-Tzedek, 2022).
Gamified exercise sets have been suggested as a means to encourage physical activity, as they provide a pleasurable way of doing exercises (Adlakha et al., 2020). They have been shown to be effective as a complement to the traditional treatment of neurological disorders such as PD (Yuan et al., 2020). This has been the case in various contexts: gamification of rehabilitation treatments has been found to enhance the involvement and motivation of individuals after stroke to perform rehabilitation exercises (Zuki et al., 2022), to provide older adults with an opportunity to socialize with others while doing their exercises (Flores and Mataric, 2013), and to increase adherence in children with growth-hormone deficiency (Radovick et al., 2018). In the context of PD, several studies investigated the effects of gamified physical activity on parameters such as gait and cognition. For example, Pompeu et al. (2015) found an improvement of balance and gait performance with the use of Kinect games; Lopez et al. (2019) explored the acceptance of smart TV applications that successfully improved cognitive abilities of IwPD and individuals who suffer from different types of dementia; Cornejo Thum et al. (2021) implemented tele-rehabilitation with a treadmill that included virtual reality for patients with PD in their homes, and found that the system improved mobility and compliance with the training of these patients; Yuan et al. (2020) found that training with virtual reality (VR) is effective in enhancing confidence in preventing falls for IwPD, as it improved posture and balance.
It has yet to be determined whether socially assistive robots (SARs) are able to increase engagement in exercise for IwPD. To the best of our knowledge, a single experiment to date has employed a SAR for IwPD, and it was aimed at helping sort medications for IwPD (Wilson et al., 2020).
To investigate the perceptions of key stakeholders regarding the potential benefits and applications of robotic technology in aiding IwPD, we conducted focus groups involving different stakeholder groups. These groups included clinicians (Bar-On et al., 2023), as well as IwPD and their family members (Kaplan et al., 2023). These studies explored the needs, attitudes, concerns, and ideas related to technological interventions, specifically SARs, to support the PD population. These studies served as a basis for the current one.
Our goal in the present study was to develop and test the feasibility of using a SAR platform we developed for cognitive-motor training by IwPD. Ultimately, we aim to improve symptom management for those living with the disorder. As a first step towards this goal, we built a prototype of a robotic exercise platform, and collected input from IwPD and their family members on its strengths and weaknesses. The development of the platform was informed by: 1) the focus groups conducted with IwPD, their family members and their clinicians (Kaplan et al., 2023; Bar-On et al., 2023); 2) the Training Big approach (Farley and Koshland, 2005), whose primary goal is to counteract the motor impairments experienced in Parkinson's disease, such as slowness of movement and reduced range of motion; by stressing larger, exaggerated movements, the approach aims to enhance mobility, balance, and overall motor function (Farley and Koshland, 2005); and 3) the input we collected from stakeholders as the study unfolded, through interviews, brainstorming sessions and questionnaires. To the best of our knowledge, this study is the first effort to create SARs specifically for cognitive-motor training by individuals with Parkinson's disease, using insights gathered from these individuals, their family members, and their healthcare professionals.
The overarching working hypothesis is that training on the gamified cognitive-motor tasks will improve the patient's clinical symptoms, with the robotic system serving both as the platform on which to practice and as a motivating element. This will not be measured in the pilot experiments. The goal of the pilot experiments is to test the feasibility of using the device with IwPD, and to improve its usability prior to the larger-scale experiment to test our hypothesis.
Materials and methods
We built and tested the robotic platform using the participatory design approach. We designed it to help users perform motor-cognitive exercises, with the use of mobile desktop robots, light cues and music. By responding to the cues that the system provides, the user would ultimately practice working memory, inhibition, and sustained attention. In the motor domain, the system allows increasing range of movement and practicing movement precision.
In this section, we first provide the details of the physical setup, and then detail the participatory design process, including two phases of a feasibility test with IwPD. In the Supplementary Materials (Sections 1, 2), we provide an overview of the hardware and software that we developed for the robots, including their subcomponents, algorithms for providing visual and verbal cues, moving and localizing in space, and logging clicks (user presses on the robot).
Physical platform
The physical system setup can be seen in Figure 1. It includes a table, a comfortable sitting chair, a leg rest, and a small table for the speakers and the laptop. The game logic is implemented on the laptop, which communicates with the robots via Wi-Fi. The robots are equipped with microswitches to allow for pressing on them from the top (like a big push button). Linear motion shafts are connected to the robot heads and used in conjunction with linear bushings and springs to facilitate accurate and linear clicking motion with minimal friction. Data from the camera, LED matrix and microswitch are sent to the computer over Wi-Fi. The prototype of the robotic platform, AutoClicker (AC), can be seen in Figure 2.
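The paper does not specify the wire format of the robot-to-laptop messages; as a rough illustration only, the sketch below assumes a simple JSON-over-UDP status packet (the port number, field names, and packet structure are all hypothetical):

```python
import json
import socket

# Hypothetical receiver on the laptop side: each robot periodically sends a
# small JSON status packet (microswitch state, LED state, etc.) over Wi-Fi.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # port chosen arbitrarily for this sketch

def poll_robot_status():
    """Block until one status packet arrives and return it decoded."""
    data, addr = sock.recvfrom(1024)
    msg = json.loads(data)  # e.g. {"robot": "left", "switch": 1, "led": "green"}
    return addr, msg
```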
Game logic
Figure 3 illustrates, using a flow chart, how the game application works. The game logic describes the steps and conditions by which an exercise session proceeds from beginning to end. The user starts at a "main menu" screen and may choose one of three practice session types or view instructive tutorials for them. Before choosing a session, a user's name must be entered, or selected from a list of previous users, to load the player's progress and customized parameters, as well as to update the correct player data in a custom-built database once the session is complete. Once a test has been automatically run to ensure that the robots are communicating with the PC application, the chosen session begins. Each session loops through a sequence of robot moves and light-ups followed by player clicks. Each session type has a unique stop criterion, after which the game data are saved in a local database and the user is returned to the main screen.
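As a rough illustration of this flow, the following is a minimal, runnable Python sketch. It is not the authors' application code: all names (SimRobot, run_session, the database schema) are hypothetical, and the robot is simulated so the sketch runs as-is.

```python
import random
import sqlite3
import time

class SimRobot:
    """Stand-in for one AutoClicker robot; the real one communicates over Wi-Fi."""
    def __init__(self, name):
        self.name = name
    def ping(self):
        return True                         # connectivity self-test
    def move_and_light(self, color):
        pass                                # real robot: move, then light up
    def wait_for_click(self, timeout=3.0):
        rt = random.uniform(0.4, 2.0)       # simulated user reaction time
        time.sleep(min(rt, timeout))
        return rt <= timeout, rt            # (clicked in time?, reaction time)

def run_session(player, robots, duration_s=180, db_path="sessions.db"):
    """One cue-click session: connectivity test, cue loop, stop criterion, save."""
    if not all(r.ping() for r in robots):
        raise ConnectionError("a robot is unreachable")
    events, start = [], time.time()
    while time.time() - start < duration_s:   # stop criterion: elapsed time
        robot = random.choice(robots)         # stand-in for game-specific cue logic
        robot.move_and_light("green")
        clicked, rt = robot.wait_for_click()
        events.append((player, time.time() - start, robot.name, int(clicked), rt))
    with sqlite3.connect(db_path) as db:      # save results to the local database
        db.execute("CREATE TABLE IF NOT EXISTS events "
                   "(player TEXT, t REAL, robot TEXT, clicked INTEGER, rt REAL)")
        db.executemany("INSERT INTO events VALUES (?,?,?,?,?)", events)

run_session("demo_user", [SimRobot("left"), SimRobot("right")], duration_s=3)
```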
Experimental process: participatory design
The platform we present here is based on the results of focus groups held with IwPD and their family members (Kaplan et al., 2023) and their clinicians (Bar-On et al., 2023). One of the needs that transpired in those focus group discussions is for a robot to assist with cognitive and motor training.
In the process of developing and testing the platform we subsequently built, we collected and implemented input from various stakeholders, including IwPD, their family members and their clinicians. This process is depicted in Figure 4.
We first held in-depth interviews with three clinicians. One of the clinicians was then provided with a preliminary prototype of the system, followed by a working prototype (prototype 1.0, described below). Afterwards, a brainstorming session with 40 clinicians was conducted in parallel to phase one of a pilot feasibility study; the pilot included practice sessions of IwPD with the robotic platform prototype 2.0, as well as interviews of the IwPD and their family members. Input from the pilot study as well as from the brainstorming session was implemented into a revised version of the prototype (3.0), used in Phase 2 of the pilot feasibility study. The two-phase feasibility test was conducted with a total of 16 IwPD and three family members. As a part of both phases of the pilot feasibility study, we conducted 15 semi-structured in-depth interviews with IwPD, as well as three with family members.
Part 1-in-depth interviews with clinicians
In part 1 of the participatory-design process, we held in-depth interviews with three clinicians: 1) an occupational therapist (OT1, Ph.D. candidate) at the institute for movement disorders and Parkinson's rehabilitation at a large rehabilitation hospital located in the center of the country, with 12 years of experience treating IwPD; 2) a physical therapist (PT1, Ph.D.) at the same institute, with 12 years of experience treating IwPD; and 3) a physical therapist (PT2) with 14 years of experience treating IwPD at their homes and in groups.

FIGURE 2. The AutoClicker mobile robot. The robot provides color prompts to the users and provides real-time visual and auditory feedback; it moves during some of the motor-cognitive exercises, such that users operate within a dynamically changing workspace. Left: the robot in its resting state (not pressed, no light cues); Middle: the robot lit up (green); Right: the robot pressed down by the user.
The interviews were semi-structured and accompanied by slides, and were conducted via Zoom by author DR, a male Master's student in Mechanical Engineering.OT1 was not part of the research team at the time that the interviews were held, and later joined the team (author IB-C).
Part 2-first prototype and follow-ups with a clinician
Following the in-depth interviews, we designed a mock-up simulation of the proposed platform based on the input we received. In this simulation, moving dots resembling robots lit up in sync with the beat of a background song while moving on a plane. We presented it to one of the clinicians (OT1) to get more input on the system. Based on the clinician's input, we created a working prototype (prototype 1.0) and presented it to OT1 to get more feedback. This prototype included a short demo of the King of the Bongos exercise described below, with a single moving robot instead of two.
Part 3-brainstorming session with clinicians
Once we had a working version of the updated prototype (prototype 2.0), we took it to a rehabilitation center in the periphery of the country (the Adi Negev rehabilitation center), where a wide range of patients, including IwPD, are treated. In order to collect feedback on the motor-cognitive exercise platform, we held a 1-h brainstorming session with the clinicians working in the rehabilitation center. The clinical team (n = 40) consisted of physiatrists, physical therapists, speech and language therapists, occupational therapists, psychologists, and social workers. The clinicians provided useful suggestions for improving the platform after we demonstrated its functionality.

This session took place in parallel with the first phase of the feasibility study. As a result of combining the clinicians' suggestions with the feedback collected in phase 1 of the feasibility study, we increased the difficulty level of the exercises, created a host-like behavior for the robot, and improved the synchronization of the movements of the robots with the music, in order to increase difficulty and variety.
The results section lists the feedback and suggestions received in parts 1-3.
Part 4-pilot feasibility studies with individuals with PD
Based on the suggestions we received from the clinicians, we designed the robotic exercise platform described in section 2.1 above. This system was then tested by IwPD to get the users' perspective on the platform.

In addition to IwPD, phase 1 of the feasibility tests also included clinicians, so that their feedback could also be used to improve the system's design. The pilot included a Registered Nurse (RN) and an Occupational Therapist (OT), who both work at the Soroka University Medical Center (SUMC).
Study procedures
A total of 16 PD patients and two clinicians used the platform in a laboratory setting, and provided feedback on any changes that should be made to the system design; they were recruited in two phases: in Phase 1 six IwPD and two clinicians provided feedback on the system; we made the necessary changes to the robotic system based on their feedback, and then collected feedback from another group of 10 patients in Phase 2.
At the beginning of the session, upon arrival at the laboratory, the cognitive assessment of IwPD was conducted using the Montreal Cognitive Assessment (MoCA) test. The evaluation was performed by author SBS, a female Master's student in the Department of Brain Sciences and Cognition, who holds a degree in psychology and has successfully completed the necessary online training for administering the MoCA test. The MoCA was not used for screening, but rather to provide a more detailed description of the characteristics of the participants. Each session lasted between 50-60 min. Of these, 25 min were dedicated to debriefing and using the exercise set, and 25-35 min to conducting the MoCA and answering the end-of-session questionnaires.
The platform offered three exercise types, each addressing a different aspect in motor-cognitive training, namely, motor control of the arms, working memory, inhibition and sustained attention (exercise sets "King of the Bongos" (KB), "Traffic Light" (TL) and "Simon Says" (SimS), respectively).
Each participant completed one run with each of the three exercises:

I. King of the Bongos (3 min long): the user chooses a song from a menu of songs. The song then starts playing, and the robots light up in green in a rhythm that matches the beat of the song. The user is asked to press on the robots as they light up. The light fades out after 3 s or when the robot is pressed down. If the user presses the robots at the correct timing, they are awarded points based on their reaction time; otherwise, no points are awarded.

II. Traffic Light (2 min long): the robots light up in two different colors, red and green. Whenever a robot is lit green, the user must press it, and whenever it is lit red, the user must not press it. Pressing a green-lit robot, as well as not pressing a red-lit robot, awards the player a score of 1 point. The difficulty level of the exercise increases if the player is awarded multiple points in a row. The higher the difficulty level, the shorter the duration between light-ups, as well as the light-up durations.
If the user fails to press a green-lit robot, or presses a red-lit robot, the score resets back to 0 and the difficulty level decreases by 1 (a minimal sketch of this adaptive logic appears after this list).

III. Simon Says (3 min long): this sequence-learning exercise is composed of pairs of two stages, the teaching stage and the recall stage (Figure 5).

a. In the teaching stage, one of the robots (the Teacher) lights up in a sequence of colors. The length of the sequence depends on the difficulty level: at the simplest level, the sequence is two colors long, and at each additional level another color is added to the sequence (up to a maximum of 5 colors). Once the teaching stage is over, the recall stage begins.
b. In the recall stage, each of the two robots lights up with a different color, one of which is the correct color in the pattern given by the Teacher. The patient must press the correct colors in the correct order to advance to the next level of difficulty. If they are successful, the level of difficulty increases; if not, it remains the same.
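The following is a minimal sketch of the Traffic Light scoring and difficulty rules described above. It is not the authors' code; the streak length and level cap are assumptions (the paper only says "multiple points in a row"):

```python
MAX_LEVEL = 10          # assumed cap; the paper does not state one
STREAK_TO_LEVEL_UP = 3  # assumed streak length; the paper says "multiple"

def update_traffic_light(state, lit_color, pressed):
    """Update (score, streak, level) after one light-up event."""
    score, streak, level = state
    correct = (lit_color == "green") == pressed   # press green, ignore red
    if correct:
        score += 1
        streak += 1
        if streak >= STREAK_TO_LEVEL_UP and level < MAX_LEVEL:
            level += 1              # shorter intervals/light-ups at higher levels
            streak = 0
    else:
        score = 0                   # score resets on any mistake
        streak = 0
        level = max(1, level - 1)   # difficulty decreases by 1
    return score, streak, level

state = (0, 0, 1)
for color, pressed in [("green", True), ("red", False), ("green", True),
                       ("red", True)]:  # last event is a mistake
    state = update_traffic_light(state, color, pressed)
print(state)  # -> (0, 0, 1): score reset after pressing a red-lit robot
```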
After having trained using all mode types, the patient was asked to choose the preferred mode to play for the final 3-min set.
Following the training sets, the participants were asked to fill out a questionnaire about the rehabilitation platform: to rate their perceived level of engagement with the platform, its perceived benefits, areas for improvement, and their intention to use it in the long term.

In order to reduce potential bias in the users' perception of the platform (e.g., users may be inclined to choose the most recent exercise set as their final exercise), participants were semi-randomly assigned the order of the exercise sets. The experimental protocol was approved by the Human Subjects Research Committee of Ben-Gurion University, and all participants gave their written informed consent to participate in the study.
Outcome measures
We asked participants to rate their perceived level of engagement with the platform, its perceived benefits, areas for improvement, and their intention to use it in the long term. We also measured reaction time and precision on the task.
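As an illustration (not the authors' analysis code), the two performance measures can be computed from the logged click events along these lines; the event structure here is hypothetical:

```python
from statistics import mean

# One (clicked_correctly, reaction_time_s) tuple per light-up event.
events = [(True, 1.1), (True, 1.3), (False, None), (True, 1.0)]

success_rate = sum(ok for ok, _ in events) / len(events)
mean_rt = mean(rt for ok, rt in events if ok)
print(f"success: {success_rate:.0%}, mean RT: {mean_rt:.2f} s")
# -> success: 75%, mean RT: 1.13 s
```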
Part 5: in-depth interviews with IwPD and family members
In parallel with the pilot feasibility studies, we conducted a total of 18 semi-structured in-depth interviews: 15 with IwPD who participated in the pilot study and three with family members of IwPD who participated in the pilot study. The aim was to capture the participants' perceptions and experiences (Denzin et al., 2017) with the SARs and how the robots may assist their specific needs; we were also interested in their willingness to use the platform at home or in the clinic. The study was approved by the university's ethics committee and the Helsinki committee of the Soroka Hospital.
The interviews were conducted shortly after participants took part in one of the pilot feasibility studies (ranging from immediately at the end of the session with the robotic platform to 2 weeks after it, based on the availability of the participants and the interviewer). The place of the interview was determined according to the interviewee's preferences. Seven interviews took place in the laboratory immediately after the end of the experiment (of those, one was continued via Zoom, and two were continued at the participants' homes), nine took place on Zoom, and two in the interviewees' homes.

FIGURE 5. Simon Says exercise visualization. The first row shows the teaching phase, during which the "teacher robot" (the one on the right) lights up in a particular sequence (in this example, blue → green → yellow). The second row shows the recall phase, during which both robots light up in different colors, and the user should press either robot once it is lit up with the correct color in the sequence (in this example, click the right robot for the blue and the green colors, and the left one for the yellow color).
All interviews were conducted by author GV, a male bachelor's student in Sociology and Anthropology, with the guidance of author YLR, an experienced qualitative researcher.
Data analysis followed the stages proposed by Strauss and Corbin (1998). In order to identify themes and concepts, we hand-coded the interview transcripts using open and in vivo coding techniques. We grouped and regrouped the resulting themes using axial coding, until the key categories were identified. We separately analyzed each key category in order to gain a deeper understanding of the participants' perceptions and experiences.
Parts 1-3 iterative design process
Supplementary Table S1 in the Supplementary Materials summarizes the results from parts 1 (in-depth interviews with clinicians), 2 (prototype 1.0 and follow-ups with a clinician) and 3 (brainstorming session with clinicians). The feedback we received from the participants is categorized as follows: 1) User interface and experience (e.g., the need to simplify the information displayed on the screens so they are not distracting); 2) Customization and flexibility (e.g., the need to give a choice of multiple songs for added variability); 3) Design and aesthetics (e.g., suggestions on the physical design features of the robots and their gestures); 4) Music and audio feedback (e.g., add a "good job"/"oops" sound for immediate auditory feedback); 5) Exercise modes (e.g., the need to slow down the pace of the exercise); 6) Data visualization (e.g., an emphasis on the need for graphic visualization for clinicians); 7) Experimental design (e.g., ideas for adjustments to the inclusion and exclusion criteria); 8) Safety (e.g., ensuring the experimental setup does not pose a risk of falling); and 9) Ideas which we did not implement (e.g., specific suggestions on how to use the robot for speech-language therapy).
Part 4-pilot feasibility study with individuals with PD
In the following sections we detail the responses from participants in the two phases of the feasibility study, regarding the platform and its potential use in the short and long term. The baseline characteristics of the IwPD who took part in phases 1 and 2 of the feasibility study are listed in Table 1.
Phase 1-pilot study with 6 individuals with PD and 2 clinicians (RN + OT)
Eight overarching themes emerged from the users' responses on using prototype 2.0:

Physical setup (display size, comfort, etc.)

Two participants referred to the comfort of the chair and its distance from the exercise table (which suited one but not the other). Four noted that the setup is too big and cumbersome to be used in a private home.
Robot physical design (clickability, human-like design features, etc.)

Two participants stated their preference for robots with added design features (e.g., an added mustache to make the robot more amusing and appealing, or robots designed as pairs, such as a boy and a girl); they also suggested completely hiding the electronics behind the casing. One participant felt that the robot's tall height was beneficial for training the upper body, but is too big for a finalized industrial product.
The tasks (difficulty, variability, engagement, etc.)

Two participants found the background music in the Simon Says exercise to be distracting, and one found the exercise to be stressful. Four thought the exercises were not sufficiently challenging; two of them said that they would use a system with more challenging levels at home. Two participants thought the robots should move more, and at different speeds and patterns, in the King of the Bongos exercise to increase engagement.
Reasons for choosing final task (difficulty, potential improvements, weak spot, etc.)

Three participants who chose TL said the exercise was simple enough to be clear and engaging. One participant, who described herself as competitive, chose Simon Says to prove to herself that she could improve and win despite the high difficulty. Two participants chose KB, since the music and the spatial movements helped them engage and focus throughout the exercise.
Engagement in the short term
Two participants said they liked the music and the engaging spatial movement of the robots in the KB exercise, and five of the six participants marked it as their most liked exercise (Figure 10). One participant described the TL exercise as "positively tiring" for the hands, especially the dominant one. Another said the movement in the KB exercise improved the attention span and increased engagement in the game.
Engagement in the long term
Four participants replied that they would use the platform at home; one replied that it was not challenging enough, and another that it was not exciting enough. Four participants explained that the system needed to be more engaging and variable to maintain motivation to use it over a prolonged period of time. Two of them suggested that more challenges or competition with other users would be necessary to maintain motivation over time. According to one participant, external motivation is necessary in order to begin practicing with the system at home. One participant indicated that she would be very interested in using the system over the long term.

Social aspect of the platform

One participant expressed a desire for the robots to greet him and to instruct him on how to perform the different exercises (rather than have the research team do it). Two other participants also suggested adding a competitive mode, which could be played with other participants in person or through the web, as a way to increase motivation. A fourth participant stated that the mere act of coming to the lab and meeting its members made her feel like a part of a family and made her want to come again.
Phase 2-pilot study with 10 individuals with PD
Eight overarching themes emerged from the users' responses to the questionnaire regarding the use of prototype 3.0:
Physical setup
Three participants suggested that future prototypes should be smaller. One participant requested a larger screen for displaying the game score.
Robot physical design
Three participants suggested that the robot's clicking mechanism should be strengthened. Two thought the robots should be smaller and shorter, and another thought the robots should be closed off, to hide the electronics behind the casing.
The tasks
Participants felt that the SimS tutorial required better explanations and perhaps a brief demonstration. The SimS game was deemed too difficult by one participant. On the other hand, two others expressed a desire for all exercise sets to be more challenging. According to one participant, the robots moved too far apart and were difficult to track in the KB exercise set, whereas another thought they should move throughout all three exercises.
Reasons for choosing final exercise set
Figure 11 shows the exercise sets that participants chose as their final exercise session in the two phases. Eight out of ten participants in Phase 2 selected the KB exercise. It was chosen by one participant because it amused him, by two participants because they enjoyed the music, and by another participant because it required a lot of movement as well as hand-eye coordination. Others felt it was intuitive and simple to play as a final exercise.
Engagement in the short term
Three participants found the exercises to be engaging, mesmerizing, and full of action. Another participant commented that, despite the exercises not being challenging, he enjoyed participating in the experiment and getting to know the lab members. One participant thought the music helped alleviate the shaking in his hands, and another noted that the system assisted him in moving the hand more affected by PD as the session progressed. His granddaughter, who accompanied him to the experiment, said afterwards: "I have never seen my grandfather active like this, I was in tears."
Engagement in the long term
A total of eight participants expressed an interest in engaging with the platform in the long term. One participant conditioned his participation on a system with smaller robots, and another on a more challenging exercise. Conversely, a participant residing near the laboratory preferred using the platform on-site rather than at home. Two other participants noted that certain households do not have suitable tables or computers, and that future systems must be user-friendly in order to be successful. According to one participant, a greater musical variety is needed over the long term. A second suggestion was to add more difficulty levels to the exercises, and a third was to award prizes for high scores, to increase motivation to practice.
Social aspect of the platform
Two participants suggested that the robots greet them in a friendly manner and recognize them. One participant indicated that he would prefer to train with a human, while another stated that he would prefer to practice with three robots rather than two.
Suggestions and observations
Two participants noted that it would be more beneficial to practice standing up, rather than sitting down.
Outcome measures
Figures 6 and 7 show the average participant reaction time and success rates, respectively, for all three exercise sets. There was a trend of increase in reaction time from Phase 1 to Phase 2 in the KB and the TL exercise sets (KB: 1.2 ± 0.1 s vs. 1.4 ± 0.3 s; TL: 1.1 ± 0.2 s vs. 1.3 ± 0.3 s; SimS: 2.2 ± 0.7 s for both phases). Success rates were maintained in the KB exercise (87% ± 9% in Phase 1 vs. 88% ± 11% in Phase 2), and increased in the SimS exercise (76% ± 23% vs. 88% ± 14%). There were opposing trends within the TL exercise: the success rate for clicking the green lights when they turned on went down (94% ± 4% vs. 84% ± 10%), whereas the success rate for avoiding clicking the red lights when they turned on went up (96% ± 5% vs. 100%).
These trends were not statistically evaluated due to the small number of participants.
Figure 8 shows the maximal difficulty level and the mode (most frequent) difficulty level reached in the TL exercise, as well as the maximal difficulty reached in the SimS exercise. Figures 9-11 show the participant responses to the questionnaires in both phases.
Part 5: in-depth interviews with IwPD and family members
Interviews revealed that a significant struggle faced by IwPD is keeping social ties with other IwPD and the rest of society, who do not necessarily understand what IwPD cope with. Therefore, when designing a system to help with PD treatment, it is important to look at the social factor and consider how SARs may contribute to IwPD's ability to create and maintain social connections. We identified five themes relevant to the social factor and how they impact IwPD's acceptance of the robot and their willingness to use the assistive robot in their home environment. These include the participant's conciliation with their diagnosis, the participant's illness stage, age, familial status, and the robot's potential influence on the caregiver's routine.
Participant's conciliation with their diagnosis
Analysis revealed a connection between participants' willingness to use the robot and their conciliation with their disease. Those who expressed hardships and who tended to hide their diagnosis (e.g., trying to hide their symptoms when in the presence of others) were keener to use the robots in their home environment. They were not keen to use the robots publicly, and stated that they would not come to a rehabilitation center to use them.
• "When I walk with a group, I wrap my hand with a long scarf, that way no one sees if my hand is trembling … No one knows I have PD … The robots can be in my house and in a public center, but I would prefer to have it in my house".(Olivia, age 68) • "When I was diagnosed, I found it difficult to accept it, and I still find it difficult now … I think the robots will be most effective in the house" (Alex, age 72) On the contrary, others, who expressed acceptance of their illness, were more willing to use robots, regardless of their location.They were more comfortable and even preferred practicing with other people rather than by themselves: Frontiers in Robotics and AI 09 frontiersin.org• "If the robot would cause four, eight, or even ten people to sit and play together, for me, it would be wow!".(John, age 69) • "It could be nice if you connect it to social media and play against others … a type of competitive game".(Maria, age 48) • "It should bring a few people; even hold a small competition and practice together." (Robert, age 76)
Participant's illness stage
While participants' conciliation with their diagnosis and their illness stage do not necessarily coincide, our data reveal a connection between participants' illness stage and their willingness to use the robots further. Those who were diagnosed more recently and experienced initial symptoms felt that the robots were not relevant for them in dealing with their symptoms:

• "I am not sure that I am the best prototype user because I am almost fully and independently functional." (Maria, age 48)

• "In my condition today, I do not feel that it is relevant for me." (Charles, age 72)

• "As an IwPD today, I did not feel challenged [during practice with the robots]." (James, age 47)

However, participants who indicated that they suffer from severe symptoms were keener to use the robots. They felt that the robots were more relevant to their physical practice, cognitive abilities and social activities:

• "I will benefit from this technology at home." (John, age 69)

• "I think it is relevant … I will use the robot at least once a day at home." (Mark, age 71)
Age
Our findings indicated that the willingness to use the robot was closely connected to age. Out of sixteen participants, thirteen were in the age range 63-76, and three were younger (in their forties). The younger ones repeatedly emphasized that the robots were not relevant to them, and they found them not challenging, both in terms of cognitive level and in terms of motor abilities:

• "I am not sure I am the best prototype user since I am at an almost fully functional level. However, for people who struggle daily, I think it can be useful." (Maria, age 48)
Those who were in a later stage of the disease were all above the average age of the participants, and as previously mentioned, they felt the robots were relevant to them. They stated they could benefit from the practice both in a clinic, as a means to meet with others, and at home, as a treatment tool.
• "If I had space in my house, I would want to use the robots to practice … Although practicing with people is not in my character, I will try.I think the best way to practice with people, is through the internet" (Richard, age 69) • "I will use it with other people, even in a competition with ten people".(Robert, age 76) • "I like being at home, so I would use it at home.My wife will make sure I will use it every day".(Oliver, age 75) It is possible that the difference along the age continuum in fact reflects a difference in disease stage, which we did not record in this study.While we did not collect information about the participants' disease stage directly, there is a moderate correlation between the age and the years since PD onset in the participants of this study (R = 0.41, p = 0.11; see Table 1).
Familial status
The younger the patients were, and the more occupied they were with caring for young children, the less motivated they were to use the robots. For older patients, in contrast, practicing with the robot actually reminded them of their grandchildren, and they expressed a positive perception of using a robot:

• "It can be nice with the kids, not only IwPD … with grandchildren, it is a competition." (Emma, age 63)
Hence, they articulated the potential social effect of using the robot vis-à-vis the younger generation, for example, by showing their grandchildren their relevance and connection to novel electronic devices.
The robot's potential influence on the caregiver's routine
Although we conducted only three interviews with family members who serve as the participants' main caregivers, our findings reveal the potential impact of the SARs on alleviating the load and burden they face on a daily basis. They highlighted the challenge of always having to be alert, be constantly present, and assist with daily tasks. Thus, when thinking of the SARs as a treatment tool that participants can hopefully use by themselves, they stated it could provide them time for rest, both physically and mentally:

• "It gives you peace of mind that there is something else, which is not you, that can help him [William] progress." (Lisa, spouse of William)

It is important to note that we held only three interviews, and so further research focusing on family members is required in order to understand the benefits these robots can have on their routine.
Practical aspects regarding the use of the platform
Analysis of the interviews revealed two sub-themes concerning the system's usage at the participants' houses and treatment centers. The first is the simplicity of the user interface, and the second is the speech factor.
User interface
Beyond the robots' design, an easy-to-use interface that lets one operate the system seems crucial to participants. When asked about their experience in the laboratory, participants responded that the system worked well, but also stated that it had to be more interactive. Older participants also stated that they fear using technology, and expressed concerns about the complexity of the system and about operating it by themselves if the robot were to be placed at home:

• "How will I be able to activate it by myself?" (William, age 68)

This exemplifies participants' fear of having to use the system at their homes without assistance. Participants also explained that the system needs to articulate and demonstrate the assignment, thus making it easier to use in laboratories or treatment centers.
The robots' speaking abilities
Participants were more willing to accept robots that could speak with them. They expressed a desire to have a conversation with the robots during training, to be greeted and receive encouragement from them, and to have their questions answered by the robots. Participants referred to the ability to speak as a crucial factor in their sense of connection to the robot:

• "It would be nice to talk to him, and he can help me practice … my wife will find tasks for him in the house." (William, age 68)

• "I would love it if I had a robot that talks to me and understands what I say." (John, age 69)

The robot's speaking abilities were also highlighted as attractive by family members. For example, Lisa, the spouse of William, said: "[the robot] reminds him to drink or take his medicine. He should become a part of the IwPD's daily routine." Hence, attributing human qualities to robots can help people connect with them and encourage them to practice more, thus transcending the boundaries of the robot as a medical tool. Participants explained that the system has to be more interactive in order to make the user feel comfortable about using it:

• "When there is an interaction with the computer, simple explanations … it is easier and contributes more to the practice." (Emma, age 63)

• "I would maybe give him [the robot] a human face, a smile. Something that will be nice … Maybe if the robot could talk, it would be more human." (Nicole, spouse of Oliver)

Moreover, the presence of the laboratory staff, and the extra explanations they provided, proved to be important to people's ability to fully understand the instructions. Therefore, providing more interactive speech abilities and explanations seems crucial to make the system accessible to the users, especially if it is intended for use at home.

FIGURE 8 (caption fragment). The difficulty level in the Simon Says exercise (right) shows the maximal difficulty level that participants reached (since the difficulty level for this exercise only increased based on user performance, but did not decrease).

FIGURE 9. Participants' rating of their experience, on a scale of 0-5. The bars denote the median response, and the dots represent the individual responses of the participants (blue for Phase 1 and orange for Phase 2).
Discussion
In this study, we developed and tested a social robotic platform for cognitive-motor exercise, using the participatory design approach with input from a total of 62 stakeholders (IwPD, their family members and clinicians). The iterative process we employed, along with a mixed-methods approach, enabled us to repeatedly improve the design of the platform, such that it better matches the needs of the users. The users in the two parts of the feasibility study reported high levels of enjoyment (median = 5) and of willingness to continue training with the platform (median = 4.5). While these numbers show the general trend, the youngest users in each phase (S1P5, aged 45; S2P1, aged 40) were found to be outliers in terms of their enjoyment using the system (using the inter-quartile range method), reporting enjoyment levels of 2 and 1, respectively. S1P5 was also found to be an outlier with a reported level of 0 willingness to use the system at home.
In accordance with principles of effective rehabilitation (Maier et al., 2019), we implemented the following aspects in the training: repetitive and goal-oriented practice, variable difficulty, and rhythmic cueing. The exercises simultaneously trained cognitive and motor aspects within a gamified environment, and the difficulty levels of the exercises were automatically adjusted based on the user's performance, and logged along with success rates.
In the cognitive domain, the system aimed to assist users in training their abilities in the areas of inhibition, short-term memory, and sustained attention. IwPD often experience difficulties in effectively filtering distractors (McNab and Kleinberg, 2008; McNab et al., 2015), which is a critical component of their working memory capacity. In the motor domain, the platform required extended reaching movements of the shoulder, predominantly flexion, horizontal adduction, horizontal abduction, and protraction arm movements, specifically encouraging movements exceeding 90°. These are crucial for performing activities of daily living (ADL) such as combing one's hair and washing one's back (Triffitt, 1998).
According to the definitions put forth by Fasola and Mataric (2013), "a socially assistive robot (SAR) is a system that employs hands-off interaction strategies, including the use of speech, facial expressions, and communicative gestures, to provide assistance in accordance with a particular healthcare context". Our robots' utilization of speech (e.g., "Let's start playing!") and of movement that invites the user to interact with them aligns with Fasola and Mataric's characterization of SARs. Throughout the session, the robots are able to adapt the difficulty level based on patient performance, as well as mimic a social dancing gesture by moving in synchronization with songs. These illustrate their socially interactive nature.
Additionally, our robots' functionality aligns with another core function of assistive robotics: providing assistance to users in diverse contexts, such as rehabilitation healthcare. Our robots' ability to provide cognitive-motor exercise while monitoring the user's performance, providing feedback to the user, and displaying performance levels over time, to be used by patients and clinicians in the future, is in line with Feil-Seifer and Mataric's (2005) definition of SARs as "the intersection of Assistive Robots and Socially Interactive Robots". To the best of our knowledge, this is the first SAR platform specifically developed for use by IwPD.
The results of Phase 2 of the feasibility study, compared to Phase 1, suggest an increase in challenge (as hinted at by the trend for slower reaction times in two out of the three exercise sets (Figure 6), and the increased time spent at lower difficulty levels in both the KB and TL exercise sets (Figure 8)), without affecting the median enjoyment level (Figure 9). This suggests that the most updated prototype provided a suitable level of challenge for the participants (not too easy on the one hand, and not frustratingly challenging on the other) and maintained a balance between difficulty and engagement.
Motor, cognitive, musical and social elements were brought up by clinicians in all parts of this experiment, requested by IwPD in Phase 1, and implemented in the system upgrade between Phases 1 and 2. These factors likely contributed to the higher long-term acceptance of the system in Phase 2 compared to Phase 1 (80% vs. 67%), the lower level of reported frustration in Phase 2 compared to Phase 1 (Figure 9), as well as to the statements made by two participants, that their symptoms were alleviated after exercising with the platform.
The experimenters anecdotally noted that some of the participants danced, sang along with the music or whistled during training. They also noted that the participants responded to the auditory feedback given by the robots, often cheering when the robot cheered, or talking back to the robot when they received negative feedback. Participants who were able to safely do so stood up during the training session, which appeared to indicate a high engagement level.
Based on the in-depth interviews conducted in Section 3.3.2, two adaptations can be considered for future versions of the system. First, creating an interface that lets participants easily activate the system by themselves; this can increase the individual's sense of capability, which for many IwPD is significantly decreased when facing the many struggles that appear with the disease (Prenger et al., 2020). Second, emphasizing the sociability factor of the robot so people can use it with other human partners while practicing.
Other suggestions that were not implemented in the current version of the system are: a learning algorithm that learns which songs the patient likes, in order to maintain novelty and interest; a sing-along mode for a higher level of difficulty, which would also improve breathing and vocal cord strength; a mode requesting a specific clicking hand, for crossover training; and a setup on a telescopic table, to train the legs and work on balance.
Design insights for technologies for IwPD
The feedback we collected in this study on a system for IwPD has shed light on design aspects which we anticipate may be helpful beyond this specific platform, for other technologies designed for IwPD, and potentially also for technology design for healthcare more broadly. We identified the following aspects: 1) the size of the platform appeared to be a crucial factor in the willingness to use the technology in the long term in the home setting; this was brought up by half of the participants; 2) the simplicity of the interface (Feingold Polak and Weiss, 2023) and of the practice explanations was important to enable most IwPD to understand the tasks and to be able to participate effectively; 3) the novelty effect is an important issue to consider with a platform designed for long-term use: this is a topic that the participants in this study brought up themselves, and it is a crucial aspect of successful adoption of the technology; 4) the social aspect of the interaction with the robot seemed to be an important motivational factor, as evidenced by the repeated requests by participants to add competitive exercise modes with other participants, additional robot speech capabilities, and a more animated physical design; 5) the participatory design approach, in which the intended end users of the system, as well as members of their support network, provide feedback on it to inform the iterative design process, not only meant that we were able to design a platform that better suited their needs, but also helped the participant recruitment effort, since our participants indicated they were highly motivated to contribute to the improvement of technological solutions for their community. Indeed, when reflecting on the entire process, we were struck by the difference between our initial suggested design and the actual prototype 3.0, which stresses the effectiveness of the participatory design process.
Study limitations
In the feasibility study, our participants interacted with the platform during a single session, which restricted our ability to assess the potential long-term improvements in cognitive and motor abilities when using the platform. Furthermore, conducting the experiment under the conditions of a controlled laboratory setting, rather than in the participants' home environment or clinical settings, might have impacted participants' perception of the system.
Conclusion
We developed and tested a SAR platform for cognitive-motor exercise by individuals with Parkinson's disease (IwPD). The system incorporated gamified exercises to train cognitive and motor abilities. Feasibility testing with 16 IwPD showed positive engagement and motivation to use the platform in the short term. The platform is novel and unique in its targeted approach of using SARs for IwPD and incorporates interactive gamified elements. In the long run, it aims to improve cognitive and motor functionality in IwPD to potentially help manage some of the PD symptoms. The study assessed technical requirements, user acceptance, operational challenges, and safety considerations. Participants reported high enjoyment levels and willingness to continue training. Future directions include long-term studies to assess the system's impact on cognitive and motor abilities over time. The platform is designed to complement and support the work of clinicians by providing engaging and motivating training for IwPD.
FIGURE 1
FIGURE 1 The physical setup of the cognitive-motor exercise games.
FIGURE 3
FIGURE 3 Flow diagram of the exercise sessions.
FIGURE 4
FIGURE 4 Participatory design workflow. Prototype 1.0 was used in Part 2. Prototype 2.0 was used in Part 3, during the pilot in Part 4 Phase 1, and also discussed in the Part 5 Phase 1 interviews. Prototype 3.0 was used in Part 4 Phase 2 and discussed in the Part 5 Phase 2 interviews.
to a training sequence (A-B-C, A-C-B, B-A-C, B-C-A, C-A-B, or C-B-A), with A denoting the King of the Bongos exercise, B denoting the Traffic Light exercise and C denoting the Simon Says exercise.
FIGURE 6
FIGURE 6 Average participant reaction times for each exercise. Results for Phase 1 are shown in blue (left) and for Phase 2 in orange (right).
FIGURE 7
FIGURE 7 Average participant success rate for each exercise. Results for Phase 1 are shown in blue (left) and for Phase 2 in orange (right).
• "The interaction should be simpler, interactive and friendlier such as greeting [the users] or creating social dialogue [with them].If it remains only explanatory, I find it harder to stay in focus.(Emma, age 63)
FIGURE 8
FIGURE 8 Difficulty level reached by participants in the two phases. The difficulty level in the Traffic Light exercise (left) shows the mode, i.e., the difficulty level at which the participants spent the most time (since the difficulty level of this exercise could both increase and decrease depending on user performance). The difficulty level in the Simon Says exercise (right) shows the maximal difficulty level that participants reached (since the difficulty level for this exercise only increased based on user performance, but did not decrease).
FIGURE 10 Participants' most liked exercises in the two phases. Results for Phase 1 are shown in blue (left) and for Phase 2 in pink (right).
FIGURE 11
FIGURE 11 Participants' choice of an exercise set for the final session in both phases. After training with all three exercise sets, participants were asked which exercise they would like to do as the fourth and final exercise session. Results for Phase 1 are shown in orange (left) and for Phase 2 in red (right).
"Medicine",
"Engineering"
] |
Hedonic Pricing on the Fine Art Market
In conditions of stock market instability, art assets can be considered an attractive investment. The fine art market is very heterogeneous, being characterized by the uniqueness of its goods, specific costs and risks, various peculiarities of functioning and different effects, and hence it needs special treatment. However, due to the diversity of the fine art market's goods and the absence of systematic information about sales, researchers studying single segments of the market do not come to the same opinion about the merits of art assets. We make an attempt to investigate the attractiveness of the fine art market for investors. Extensive data was collected to obtain a complete pattern of the market, analyzing it within different segments. We use the Heckman model in order to estimate the art asset return and find the most influential factors of art price dynamics. Based on the estimates obtained, we construct a monthly art price index and compare it with the S&P500 benchmark.
Introduction
Traditionally, the fine art market has been supported by the interest of collectors, but over the last 10 years buyers have paid more attention to the profit to be obtained from investing in art. Nowadays investing in fine art is an alternative to the classical instruments of investment, especially when the stock market falls (as in 2008). Thus, a certain class of art market investors and funds has taken shape. Over the last ten years not only has the structure of art market participants changed; we can also observe noticeable changes in taste preferences which cause new tendencies on the market. First of all, it is necessary to highlight the increase of sales on the fine art market. Worldwide auction revenue has increased more than three times, from $4.15 billion in 2005 to $15.2 billion in 2014. Indicators such as the number of million-dollar lots and the bought-in rate also point to positive trends on the market: in 2013 the number of million-dollar sales reached 1519 lots compared to 487 lots in 2005, while the bought-in rate keeps to the level of 30-35%. From the geographic point of view, China has displaced the USA from the leader's position, despite the fact that the USA along with the UK had been predominating on the global fine art market for more than 50 years. The largest art sector is modern art: it takes more than half of the market in terms of revenue generated by public sales. However, over the last ten years the share of postwar art has grown from 15% to 26% due to the growth of popularity of, and prices for, art works in this sector. The postwar and contemporary art sectors should be considered the most speculative ones; the price volatility attributed to these segments is the highest on the market according to the Artprice index (www.artprice.com). Impressionists' works are steadily retiring from the turnover, as are old masters' works, which last attracted attention in 2009, when auctions reduced the supply of contemporary art due to the crisis. As for the form of art works, the major part of lots sold are paintings. Besides, a significant number of drawings sales is observed; prices for drawings rise along with the growing popularity of Chinese art. The leaders among auction houses are Christie's and Sotheby's, but their shares have decreased in the last decade due to the entrance of Chinese auction houses on the fine art market. According to the Artprice rating of artists, based on the total revenue generated by public sales of each artist's work since 2009, the leaders on the market are modern artists: P. Picasso, M. Rothko, A. Giacometti, A. Modigliani, F. Bacon, W. de Kooning, F. Leger and others. Pablo Picasso unalterably takes the first position on the market podium.
Every year there are more and more Chinese among the most prosperous artists: Qi Baishi, Zhang Daqian, Xu Beihong, Fu Baoshi, Zao Wou-Ki, Li Keran and others.
Almost all auctions which operate on the fine art market are "English" auctions. In an English auction the bidding starts at the minimum bid and participants then raise their bids; when the bidding stops, the item is knocked down at the hammer price. According to game theory, buyers benefit by raising their bids incrementally above the bid announced by the previous bidder. If the hammer price of an item is less than its reserve price, which is set by the seller, the item goes unsold. The percentage of unsold items is called the bought-in rate. Auctioneers and sellers keep the reserve price secret, and the literature in this field still does not fully explain this strategy: some studies give reasons for such behavior [1,2], while others [3,4] provide evidence that it is not an optimal strategy [5]. The reserve price of an item is considered to be a little below its low auction estimate (about 70% of the low estimate); among other researchers, Ashenfelter and Graddy [6] confirmed this suggestion in 2011. The high and low estimates for each item are published in the auction's presale catalogs. It is worth mentioning that both theory and empirical studies confirm the accuracy of these estimates. The auction house receives commissions from both the buyer and the seller. The buyer's premium is 10-25% of the hammer price; it is one of the main instruments of competition between auction houses [7]. However, the buyer's premium could be much smaller for institutional investors. The seller's commission varies from 5 to 10% of the sale price and can be negotiable.
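To make the fee structure concrete, here is a small worked example; the fee rates are hypothetical values within the ranges quoted above, and none of the numbers come from the paper's data.

```python
def auction_outcome(hammer_price, low_estimate,
                    buyer_premium=0.20, seller_commission=0.08):
    """Illustrative breakdown of an English-auction sale.

    The premium and commission defaults are assumptions inside the ranges
    quoted in the text (buyer's premium 10-25%, seller's commission 5-10%).
    """
    reserve = 0.70 * low_estimate           # rule of thumb cited above
    if hammer_price < reserve:
        return None                         # item goes unsold (bought in)
    return {
        "reserve": reserve,
        "buyer_pays": hammer_price * (1 + buyer_premium),
        "seller_gets": hammer_price * (1 - seller_commission),
    }

# A painting with a $100,000 low estimate hammered at $120,000:
print(auction_outcome(120_000, 100_000))
# {'reserve': 70000.0, 'buyer_pays': 144000.0, 'seller_gets': 110400.0}
```

The auction house's take in this example is the $33,600 gap between what the buyer pays and what the seller receives.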
The fine art market is characterized by certain effects. An unsold item is called "burned", and some auction houses do not permit owners to re-offer such an item immediately after the unsuccessful auction, due to the belief that the failure of unsold items causes a price decrease. Beggs and Graddy [8] argue that an unsold artwork loses about one third of its final price at re-selling. The "masterpiece" effect implies that, as a rule, dealers recommend buying one "masterpiece" for $100,000 rather than ten art works for $10,000 each. However, this effect is doubtful; for example, Pesando [5] did not find any evidence supporting such an opinion. There is also a hypothesis about an indirect impact of the stock market on the art market via the welfare of its players, the "welfare" effect [9]. According to the anchoring effect, past prices or auction estimates of an item could influence the buyer's and seller's perceptions of its real value, which is reflected in the sale and reserve prices respectively [10].
Several reasons maintain interest in the art market: acknowledgment of social status by buying a luxury item such as a painting, aesthetic pleasure, and acquisition of a potential investment asset. The self-value of artworks and their ability to give aesthetic pleasure to the buyer, together with high transaction costs for acquisition and storage (auction commissions, transportation and storage costs, customs fees) and specific risks (theft, fire, forgery), are the factors which determine a lower yield compared to securities. Some researchers hold this point of view, revealing and explaining a lower return on the fine art market compared to the stock market [11,12]. On the other hand, another group of experts does not share this opinion and provides evidence of a relatively high art market return along with a moderate risk [13,14].
Empirical studies apply different methods of return assessment: some authors use direct price index construction [15], others use the hedonic method (HM) [11,13,16,17], and still others prefer the repeated sales method (RSM) [12,18-21]. However, the differences between the results presented in these studies are caused by the original data rather than by differences between the methods applied: even if researchers use the same time interval and the same methods but estimate on different samples, inferences will vary because the art market is extremely heterogeneous. This paper is therefore aimed at analyzing art asset prices based on the most complete and comprehensive dataset possible on oil paintings as the representative fine art asset.
We collect information about oil paintings presented at auctions in 2005-2015, since before 2005 the data provided is not complete. There are 536,660 observations in the sample. The following information for each observation was downloaded: information about the author (name, nationality, date of birth) and the auction (date, city, and lot number), individual characteristics of the oil painting (name, height, width, signature, exhibitions, references in literature), the hammer price, and the low and high auction estimates. Along with the mentioned characteristics we add other indicators, such as sale in a capital city, macroeconomic region, author's nationality, art sector and the artist's rating according to Artprice, which, to the best of our knowledge, were not analyzed in the corresponding literature.
As an initial step toward understanding the overall situation on the art market, we provide a detailed descriptive analysis identifying key trends on the market and demonstrating its structure. Further, we estimate the hedonic art price index, taking into account the problem of self-selection via the Heckman selection model, and compare it with the S&P500 benchmark.
The main contribution of the paper is defining the factors of oil painting prices on a sample covering almost all public fine art auctions around the world. We embed self-selection bias correction in the estimation procedure by applying the Heckman model to the price equation and add some new regressors.
The rest of the article is organized as follows: Section 2 describes the data and the methodology employed in the research, Section 3 contains empirical results, Section 4 discusses findings and concludes.
Data and Descriptive Analysis
Data for the study has been collected from Artsalesindex.artinfo using MATLAB. Prices and auction estimates are adjusted for inflation based on the CPI index.
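As a sketch of the deflation step (our illustration; the CPI values below are placeholders, not the series used in the paper):

```python
def deflate(nominal_price, cpi_at_sale, cpi_base):
    """Express a nominal hammer price in base-period (real) dollars."""
    return nominal_price * cpi_base / cpi_at_sale

# A $100,000 hammer price at a hypothetical CPI of 218,
# restated at a hypothetical base-period CPI of 195:
real_price = deflate(100_000, cpi_at_sale=218, cpi_base=195)
print(round(real_price))  # ~89,450
```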
In the descriptive analysis we focus on median prices rather than average ones, because median prices are more informative in the case of the fine art market: from time to time paintings are sold for record prices, which biases the averages. The descriptive analysis demonstrates the significant heterogeneity of the market, even considering only oil paintings: there is a small layer of 1-5% of "masterpieces" sold for millions of dollars, while the price of other artworks is only about $10,000-50,000. This fact is illustrated in the box plot in Figure 1, which shows many outside values above the upper adjacent value, especially in 2012, a year characterized by price records on the fine art market. Half of the paintings, situated between the lower and upper hinges, were sold for about $1,000-60,000; the median log price is closer to the 25th percentile. The box plot also reveals a tendency for price dispersion to increase over time, reflected in the growing length of the boxes and whiskers.
The heterogeneity of the art market is also observed at the level of countries. According to the map (see Figure 2), the USA ($15,000 million) and the UK ($5,000 million) are the leaders on the market; they accumulated the biggest revenue from public auctions over the 11 years. Apart from the USA and the UK, France and Italy cross the threshold revenue of $1,000 million. These observations do not comply with the market trend regarding the important role of China; however, as we consider only oil paintings, which are not a typical medium for Chinese artists, their works are represented in the sample by a small number of lots. Figures 3 and 4 represent the bought-in rate and the median/average price of paintings by artist's nationality. The demand for American paintings seems to be high, as the bought-in rate attributed to their artworks is relatively low; the moderate median price indicates a possible affordability of their paintings, and the highest average price reflects a big price range for American art. The relatively low bought-in rates along with high median prices of Arab, Asian and Latin American paintings could reveal a certain fashion for such art; according to the indicators, we also conjecture that the market for European art is over-saturated and that Russian paintings are overpriced. The structure of the fine art market has not changed much over the 11 years. The pie chart (Figure 5) demonstrates the structure of our sample by art sector; the predominance of modern art complies with the key trend described above. According to the histograms represented in Figure 6, younger art is priced lower. We would like to point out that the financial crisis affected the art market: the recession lasted from 2008 to 2012, and the contemporary art sector, as the most speculative one, was affected by the downturn most of all. The price dynamics support a tendency towards higher volatility on the market; the evidence is provided by Figure A1 in the Appendix.
Empirical Model
In this paper we exploit the hedonic approach by means of the Heckman model with sample selection. Despite having a big sample, most observations represent non-repeated sales; thus, we have chosen the HM instead of the RSM in order to avoid discarding the major part of the observations. Moreover, both methodologies yield similar results. Finally, Bocart and Hafner [22] in their recent study argue that the hedonic regression framework is an appropriate tool for analyzing heterogeneous goods.
An assumption underlying the hedonic methodology is that an individual optimizes the consumption of a product's characteristics, choosing the product with the optimal set of parameters subject to a budget constraint. This means that the choice depends on income and on the implicit "prices" of the characteristics, which are valuations, in money terms, of the differences between items' qualities.
We use the Heckman model, sometimes called the Tobit II model, in order to estimate the return, because it removes the bias caused by the fact that the sample contains some unsold paintings whose hammer prices did not reach their reserve prices. Therefore, we should also model the probability of sale. Including unsold paintings in the sample is necessary because it allows us to verify the importance of accounting for sample selection and to obtain a more reliable hedonic index. The model to be estimated is presented in (1).
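For reference, the canonical Heckman selection specification that this description corresponds to takes the following form; this is the textbook version in our notation, not necessarily the authors' exact equation (1):

$$ s_i^* = z_i'\gamma + u_i, \qquad s_i = \mathbf{1}[s_i^* > 0] \qquad \text{(sale equation)} $$

$$ \ln p_i = x_i'\beta + \varepsilon_i, \qquad \text{observed only when } s_i = 1 \qquad \text{(price equation)} $$

$$ E[\ln p_i \mid s_i = 1] = x_i'\beta + \rho\,\sigma_\varepsilon\,\lambda(z_i'\gamma), \qquad \lambda(\cdot) = \phi(\cdot)/\Phi(\cdot), $$

where $\lambda$ is the inverse Mills ratio; a significant coefficient on $\lambda$ indicates that ignoring unsold lots would bias the hedonic coefficients.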
As we have already mentioned, the art market is very heterogeneous at the price level as well as at the sector level. Hence, we estimated the model for each art sector separately.
Results
We estimate the Heckman model for each sector of the fine art market. The results for (1a) are shown in Tables 1 and 2. The dependent variable is the logarithm of the painting price, and we estimate separate equations for each of the five art sectors, namely Old Masters, Impressionism, Modern art, Postwar art and Contemporary art. Coefficient t-statistics are in parentheses; levels of significance: * -5%, ** -1%, *** -0.1%.

Almost all coefficients in the equation are significant and have the expected signs, except for the coefficient on "author's signature", whose negative sign does not comply with theoretical expectations. This could arise when a painting without any signature is a rare or unusual (atypical) work of an artist, which is valued at a higher price. We observe large differences in coefficient values across sectors. The highest contribution to the explanation of the price logarithm is provided by the author's popularity: if an author is among the top 100 artists according to the Artprice review, the price of his artworks is almost three times higher than that of paintings by authors not present in the rating. The same holds for the top 500. Moreover, the younger the art sector, the higher the coefficient values attributed to these two variables. It should also be mentioned that the price increases as a consequence of the painting's publication in the literature, exhibitions, sales in the second and fourth quarters (in comparison with the first quarter) and sales in a capital city. Contemporary paintings by American artists sell better than those by Europeans; on the other hand, the price of American Old Masters artworks is lower. Russian nationality of the artist contributes to a higher sale price except in Contemporary art.

Further, we estimated an extended specification of the model in order to obtain a monthly hedonic index. In particular, we added dummy variables for each month of the period 2007-2015 (the base category is January 2007). We consider the results obtained by estimating the model on the full sample and on a restricted one, which contains only oil paintings under $10,000. The variance of the index is too high even if we restrict attention to paintings under $10,000. As an example, we present the art price index dynamics in comparison with the S&P500 index for the largest sector, modern art (Figure 7); the indexes constructed for other sectors demonstrate similar behavior. It should be noted that we use different sets of explanatory variables for different art sectors as a robustness check; due to the low variation of some regressors (dummy variables for the author's nationality and the region of sales) with respect to the relatively small number of observations in some sectors (Old Masters, Impressionism and Contemporary art), we suppose that the coefficient variances associated with such variables could be overestimated.
Discussion and Conclusions
In this paper we focus on oil paintings as the most representative product on the art market; the period of observation is 2005-2015. The data analysis confirms the high level of heterogeneity even when considering oil paintings only. The analysis indicates that the non-uniformity of the market is observed at the level of countries and across artists' nationalities; the parameters considered in the study demonstrate different behavior across art sectors; we could also provide evidence of the impact of the 2008 crisis on the art market.
The comparison with S&P500 shows that the US market index is outperformed by the art price index during the periods of negative returns. Consequently, oil paintings can serve as a "safe haven" asset, which is in line with [23], who show that between 2000 and 2015 art assets do not underperform the market.
For the art price index estimated on the broad sample, the average return is about 1% above inflation, which is substantially lower than for some fast-growing fine art markets such as Russia, India and China [24]. For more details on art price returns in different countries see [25].
The negative impact of the 2008 crisis on oil painting prices is supported by the estimation results: during the four years after the crisis, the coefficients of the corresponding variables are significant and negative, and the contemporary art market incurred the highest losses from the crisis.
The study shows that the sample selection model helps us understand the factors that drive art prices while taking into account the bias caused by the presence of unsold paintings. There are at least four factors that have the highest influence on oil painting prices and, as a result, are most important for investors: the author's popularity, the novelty of the art sector, mentions of the painting in the literature, and participation in exhibitions.
Besides, the works of Russian artists obtain a higher sale price in all sectors except Contemporary art, in contrast to Americans, who have a negative price increment in the Old Masters sector. The other nationalities have significant and positive coefficients.
The findings of the paper also include a quadratic relation between the size of a painting and its price, meaning that there is some "optimal" size of the canvas for which the highest price is paid.
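As an illustration of this result (the specification below is our sketch, not the authors' printed equation), with linear and quadratic size terms

$$ \ln p_i = \cdots + \beta_1 s_i + \beta_2 s_i^2 + \cdots, \qquad \beta_1 > 0,\ \beta_2 < 0, $$

the price-maximizing canvas size is the vertex of the parabola:

$$ \frac{\partial \ln p}{\partial s} = \beta_1 + 2\beta_2 s = 0 \quad\Longrightarrow\quad s^* = -\frac{\beta_1}{2\beta_2}. $$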
To sum up, we investigate factors that influence the pricing of oil paintings on the broad worldwide sample. The results of this analysis could be applied by buyers, as well as sellers of the art assets, while making a decision on the transaction.
The study can be improved by addressing possible issues of endogeneity, adding more characteristics describing both the artist and the paintings, and examining the long-run dynamics and cointegration of fine art market sectors similarly to [26]. It would also be interesting to continue the work by adding a survival model in order to see how the time of sale affects the price.
"Economics"
] |
Coupled-ring-resonator-based silicon modulator for enhanced performance
A compact silicon coupled-ring modulator structure is proposed. Two rings are coupled to each other, and only one of these rings is actively driven and over-coupled to a waveguide, which enables high modulation speed. The resultant notch filter profile is steeper than that of the single ring and has exhibited a smaller resonance shift and lower driving electrical power. Simulations show that: (i) potentially 60-Gb/s non-return-to-zero (NRZ) data modulation with over 20-dB extinction ratio can be achieved by shifting the active ring by a 20 GHz resonance shift, (ii) the frequency chirp of the modulated signals can be adjusted by varying the coupling coefficient between the two rings, and (iii) dispersion tolerance at 0.5-dB power penalty is extended from 18 to 85 ps/nm, for a 40-Gb/s NRZ signal. ©2008 Optical Society of America OCIS codes: (060.0060) Fiber optics and optical communications; (130.0130) Integrated optics; (200.4650) Optical interconnects; (230.4110) Modulators; (999.9999) Silicon photonics.
Introduction
Electro-optic modulators are key elements of any optical communication system, and there has been much interest in using ring resonators in electro-optic modulators. When compared to Mach-Zehnder modulators, ring resonators tend to be smaller in size and have the potential for lower power consumption and array integration [1,2]. This topic has taken on new excitement due to the potential for using compact rings in silicon-based photonic circuits [3]. Therefore, any advance in the basic functionality of a ring-based silicon modulator would be of benefit, and the key characteristics tend to be: (i) operating bandwidth, (ii) driving power, (iii) extinction ratio, and (iv) induced frequency chirp [4].
There have been several demonstrations of silicon ring resonator based modulators [5,6]. The typical design is a waveguide with a single ring resonator coupled to it, in which the driving voltage applied to the ring shifts the resonance wavelength and produces a modulation of the pass-through optical field at a given wavelength. A laudable goal would be to develop novel structures to enhance the bandwidth, efficiency, and flexibility of electro-optic modulation based on silicon. We note that novel electrical structures and driving schemes have been reported to increase the modulator speed [6], and the fundamental limitation to the modulation speed comes mainly from the optical domain. For a single ring modulator, the photon lifetime limits the achievable modulation bandwidth to the resonance linewidth, since the resonance shift is typically comparable to the resonance linewidth. Multiple-ring structures can also be used as modulators [7], which opens the possibility of further increasing modulation speed. Coupled-ring-resonator structures composed of a sequence of microresonators coupled to an optical waveguide have been reported [8,9]; previously, these structures were used for slow light or nonlinear optics.
In this paper, we propose and analyze a coupled-ring-resonator-based modulator in which only one ring resonator is driven; this ring is heavily over-coupled to the waveguide to increase operation speed. A passive ring is coupled with it to obtain a high extinction ratio. It is shown that up to 60-Gb/s NRZ modulation with over 20-dB extinction ratio can be achieved with only a 20-GHz resonance shift of the active ring. The frequency chirp of the modulated signals can be adjusted by varying the coupling coefficient between the two rings. Compared to the signals generated by a single-ring modulator, the coupled-ring modulator enhances the dispersion tolerance of a 40-Gb/s NRZ signal at 0.5-dB power penalty from 18 to 85 ps/nm in data transmission over single mode fiber (SMF) without dispersion compensation.
Principle
Typically, as shown in Fig. 1(a), in a single ring modulator a continuous wave (CW) laser source is fixed at the ring resonance wavelength. When the drive voltage is turned on/off, the resonance is shifted back and forth due to the carrier density change in the ring waveguide, and thus the CW laser light is modulated [1,2]. In the optical domain, the modulation speed depends on how fast the light can be coupled in and out of the cavity, which is related to the photon lifetime and the resonance linewidth of the cavity. Hence increasing the coupling helps achieve a shorter photon lifetime and larger bandwidth.
However, simply increasing the coupling will decrease the cavity Q and enlarge the resonator linewidth, which may either cause a lower extinction ratio in the modulated signals or require a larger resonance shift [1]. Inevitably, there is a tradeoff between the bandwidth and the resonance shift for a single ring modulator when a certain extinction ratio is to be maintained.
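This tradeoff can be read off the standard temporal coupled-mode expression for the through-port transmission of a single all-pass ring (a textbook result we add here for context, using the decay-rate notation defined in the next subsection):

$$ \frac{E_{out}}{E_{in}} = \frac{j(\omega-\omega_{res}) + 1/\tau_o - 1/\tau_e}{j(\omega-\omega_{res}) + 1/\tau_o + 1/\tau_e}, $$

where $1/\tau_e$ is the decay rate into the waveguide and $1/\tau_o$ the intrinsic loss rate. Critical coupling ($1/\tau_e = 1/\tau_o$) gives complete extinction on resonance, while over-coupling ($1/\tau_e \gg 1/\tau_o$) broadens the linewidth, and hence the modulation bandwidth, but raises the on-resonance transmission floor and thus lowers the extinction ratio.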
Instead, as illustrated in Fig. 1(b), we describe a coupled-ring-based modulator, which utilizes two coupled rings. We designate the ring immediately adjacent to the waveguide as the "inner" ring and the other one as the "outer" ring. The inner ring is heavily over-coupled (i.e., coupling >> loss) to the bus waveguide, and only this ring is actively driven to produce data modulation. Given a fixed cavity Q, and considering that the loss is less than the coupling, the light energy in the inner ring can be accumulated much faster compared to the critical coupling case (coupling = loss) that is desired in the single-ring modulator. Since the output signal in the waveguide is determined by the interference of the input CW light with the light coupled from only the inner ring, the over-coupled ring can potentially respond to a higher-speed electrical signal due to the faster energy accumulation. Moreover, compared to the single-ring modulator, the proposed coupled-ring modulator features several potential advantages: (1) as shown in Fig. 1(b), the coupled-ring structure has a deeper notch profile and thus enables a relatively high extinction ratio; (2) the transmission profile of the coupled-ring structure becomes steeper, which may allow a smaller resonance shift and lower driving electrical power; and (3) increased design degrees-of-freedom provide better flexibility to optimize the performance of the modulated signals in different communication scenarios, such as when chirp adjustment is desired.
Referring to [10], to model the coupled-ring modulator we utilized an ideal 60-Gb/s NRZ drive voltage sent through a five-pole Bessel electrical filter as the electrical input signal. The variation in carrier density is simulated as a charging process following the applied voltage; according to [11], this is believed to be a good fit to the real behavior of MOS capacitors. The carrier transit time, defined as the duration for the carrier density to increase from 10% to 90% of its peak value when a voltage step is applied, is taken to be 10 ps [10,12]. The continuous wave from the laser source is then modulated. The relationship between the resonance frequency and the index is given by the resonance condition ω = mc/(n_eff R), where ω is the resonance frequency, m an integer, c the speed of light in vacuum, n_eff the effective index and R the ring radius. A change in voltage from 0 to 5 V causes a resonance peak shift of 0.16 nm towards shorter wavelength. To obtain the modulated optical signals, a set of differential equations is solved. The following derivations are essentially based on Refs. [3,13]. The time rate equations of the energy amplitude change in the coupled-ring resonator, with the inner ring coupled to a single waveguide, are (reconstructed here in the standard coupled-mode form):

$$ \frac{da_1}{dt} = \left( j\omega_1 - \frac{1}{\tau_e} - \frac{1}{\tau_{o1}} \right) a_1 - j\mu_2 a_2 - j\mu_1 E_{in} \quad (1\text{-a}) $$

$$ \frac{da_2}{dt} = \left( j\omega_2 - \frac{1}{\tau_{o2}} \right) a_2 - j\mu_2 a_1 \quad (1\text{-b}) $$

$$ E_{out} = E_{in} - j\mu_1 a_1 \quad (1\text{-c}) $$

where a_1 and a_2 are the energy amplitudes and ω_1 and ω_2 the resonance angular frequencies of the inner ring and outer ring, respectively; E_in and E_out are the incident and transmitted wave fields; 1/τ_e is the amplitude decay rate due to power coupling into the waveguide; 1/τ_o1 and 1/τ_o2 are the amplitude decay rates due to intrinsic loss in the inner ring and outer ring; μ_1 = κ_1 (v_g1/2πR_1)^{1/2} and μ_2 = κ_2 (v_g1 v_g2/(2πR_1 · 2πR_2))^{1/2}; κ_1 is the power coupling between the ring and the bus waveguide; κ_2 is the mutual power coupling between the rings; R_1 and R_2 are the inner ring and outer ring radii; v_g1 and v_g2 are the group velocities in the inner ring and outer ring; and ω_0 is the carrier wave frequency.
Since the inner ring is active while the outer ring is passive, ω 1 is modulated around the carrier frequency ω 0 , whereas ω 2 is actually fixed at ω 0 .
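To illustrate how Eqs. (1-a)-(1-c) can be integrated numerically, here is a minimal Python sketch written in the rotating frame of ω_0; the sign conventions follow the standard coupled-mode form used above, and every parameter value is a placeholder rather than one of the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters; none are taken from the paper.
dw = 2 * np.pi * 20e9                  # 20-GHz resonance shift of the active ring
tau_e, tau_o1, tau_o2 = 5e-12, 1e-9, 1e-10
mu1, mu2 = np.sqrt(2.0 / tau_e), 2 * np.pi * 5e9
E_in = 1.0                             # CW input field amplitude

def delta1(t):
    """Detuning w1(t) - w0 of the active ring, driven by a toy NRZ pattern."""
    bit = int(t / 16.7e-12) % 2        # alternating 0/1 bits at ~60 Gb/s
    return dw * bit

def rhs(t, y):
    # State packs Re/Im of a1 and a2; Eqs. (1-a), (1-b) in the frame of w0.
    a1, a2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    da1 = (1j * delta1(t) - 1/tau_e - 1/tau_o1) * a1 - 1j*mu2*a2 - 1j*mu1*E_in
    da2 = (-1.0 / tau_o2) * a2 - 1j * mu2 * a1   # passive ring: w2 = w0
    return [da1.real, da1.imag, da2.real, da2.imag]

sol = solve_ivp(rhs, (0.0, 200e-12), [0.0, 0.0, 0.0, 0.0], max_step=5e-14)
a1 = sol.y[0] + 1j * sol.y[1]
E_out = E_in - 1j * mu1 * a1           # transmitted field, Eq. (1-c)
print(np.abs(E_out)**2)                # output power samples over the run
```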
The operation principle of the coupled-ring modulator is as follows. First we focus on the inner ring, which is over-coupled to the waveguide. Assuming μ_2 = 0 in equations (1-a, b, c), the energy in the inner ring resonator evolves as

$$ \frac{d|a_1|^2}{dt} = -2\left( \frac{1}{\tau_e} + \frac{1}{\tau_{o1}} \right) |a_1|^2 + 2\mu_1\,\mathrm{Im}(a_1^* E_{in}) \quad (2) $$

Since the cavity Q is fixed, the sum of 1/τ_e and 1/τ_o1 is also fixed. As shown in Eq. (2), for an over-coupled ring resonator the ring energy |a_1|² can increase faster compared to the critically coupled scenario with the same cavity Q, due to a higher μ_1, thereby resulting in a potentially higher response speed. Over-coupling, however, will result in the energy amplitude inside the ring being stronger than that in the critical-coupling condition, which produces a high signal '0' level and a low extinction ratio.
In order to remove the extra energy in the inner ring, the outer ring is needed. Given μ_2 ≠ 0 and using equations (1-a) and (1-b), the time rate of change of the energy in the rings evolves as

$$ \frac{d|a_1|^2}{dt} = -2\left( \frac{1}{\tau_e} + \frac{1}{\tau_{o1}} \right) |a_1|^2 + 2\mu_2\,\mathrm{Im}(a_1^* a_2) + 2\mu_1\,\mathrm{Im}(a_1^* E_{in}) \quad (3\text{-a}) $$

$$ \frac{d|a_2|^2}{dt} = -\frac{2}{\tau_{o2}} |a_2|^2 - 2\mu_2\,\mathrm{Im}(a_1^* a_2) \quad (3\text{-b}) $$

As compared to equation (2), the mutual energy coupling term 2μ_2 Im(a_1^* a_2) from the inner ring to the outer ring in Eq. (3-a) can decrease the inner-ring energy |a_1|², resulting in a low signal '0' level and a high extinction ratio. Adding an outer ring produces a mutual coupling loss in the inner ring while not decreasing the modulation bandwidth significantly: the mutual coupling loss in the inner ring is proportional to the energy amplitudes in both the inner and outer rings (a_1 and a_2), and, according to Eq. (3-b), the accumulation rate of a_2 is proportional to a_1, so the substantial growth of a_2, and with it the mutual coupling loss, only happens as a_1 accumulates, by which time the signal '0' level has already been generated by the interference of the input CW light with the light coupled from the inner ring.
As shown in Fig. 2, the proposed modulator achieves both high response speed and high extinction ratio, exhibiting the same modulation performance with only 1/3 of the resonance shift, while the over-coupled single ring modulator produces heavily distorted signals.
Results
In Fig. 3, the 3-dB bandwidth and extinction ratio of the modulated signals are examined. The single ring modulator is compared to the coupled-ring modulator as the power coupling coefficient κ_1 between the ring and the waveguide (WG) is varied. For the single ring modulator, the cavity Q of the ring is maintained at 9500, corresponding to a 20-GHz resonance linewidth, and the ring radius is 2.7 μm; these parameters are feasible with state-of-the-art fabrication [14]. As shown in Fig. 3(a), with a 20-GHz resonance shift the 3-dB bandwidth of the single ring modulator increases with the coupling coefficient κ_1. Critical coupling occurs around κ_1 = 0.095. For κ_1 > 0.1, the modulation bandwidth trades off against the extinction ratio and is limited to 30 GHz with an extinction ratio > 10 dB.
For the proposed structure, we would like to utilize an over-coupled inner ring in which energy can be accumulated very fast at resonance. Hence the probe wavelength is set equal to the inner ring resonance, and the ring-ring coupling is set small enough to keep the transmission profile a deep notch. The inner ring has a cavity Q-factor of 9500 for a fair comparison, and its round-trip loss coefficient is 0.9991. The coupling coefficient κ_2 is set to 0.0094, and the round-trip loss coefficient of the outer ring is 0.9787. The two rings have the same radii of 2.7 μm. In Fig. 3(b), critical coupling occurs at κ_1 = 0.012. Given a resonance shift of 20 GHz, a simulated modulation bandwidth of up to 50 GHz is observed with 12-dB extinction ratio. The coupling coefficient κ_2 between the two rings plays a critical role in optimizing the performance of the coupled-ring modulator. One can consider the coupled-ring structure as a single compound resonator, and κ_2 can be varied to adjust the energy distribution between the inner and outer rings. Since the loss in the outer ring is larger than that in the inner ring, one can modify the overall loss coefficient of the compound resonator by changing κ_2; this differs from a single ring resonator, in which the loss is determined by the fabrication technique [3]. As κ_2 increases, the compound resonator becomes lossier and is switched from the over-coupled condition to the under-coupled one. Critical coupling is observed near κ_2 = 0.013, at which the extinction ratio of the modulated signal is more than 20 dB (Fig. 4(a)). As a result, the frequency chirp is switched across the critical point (Fig. 4(b)). However, we note that a large instantaneous frequency chirp occurs in the low-power region of the pulse waveforms; thus it is important to consider the chirp together with the instantaneous power. We define the "effective chirp" as f(t)⋅p(t), where f(t) and p(t) represent the instantaneous frequency and power of the modulated pulses, respectively. With an input power of 0 dBm, we choose the peak value of the calculated effective chirp to evaluate the chirp of the modulated signals, as plotted in Fig. 4(b). As κ_2 increases from 0.009 to 0.016, the peak value of the effective chirp changes from -4.45 to -1.0 mW⋅GHz.
The system performance of the coupled-ring modulator is simulated and compared to a single ring modulator. In the back-to-back case, a 60-Gb/s NRZ signal is modulated with a single ring modulator (both linewidth and resonance shift = 60 GHz) and a coupled-ring modulator (both inner ring linewidth and resonance shift = 20 GHz), respectively, as shown in Fig. 5(a). As stated above, the modulated signal exhibits a negative effective chirp, which can be controlled by κ_2. As an example, a 40-Gb/s NRZ signal is modulated by coupled-ring modulators with different κ_2, compared to a single ring modulator. Fig. 5(b) shows the simulated power penalty of the 40-Gb/s signal in SMF transmission without dispersion compensation. We note that less than 0.5 dB power penalty under 85 ps/nm chromatic dispersion is achieved for the 40-Gb/s NRZ signal with the coupled-ring modulator. This is consistent with the trend shown in Fig. 4(b). In contrast, a single-ring modulator shows a power penalty as high as 3.5 dB under 35 ps/nm chromatic dispersion and exhibits less flexibility to modify the properties of the modulated signal due to the need to maintain critical coupling. Dispersion tolerance at 0.5-dB power penalty is thus extended from 18 to 85 ps/nm.
Conclusion
A silicon microring modulator with a coupled-ring-resonator structure is proposed. A 60-Gb/s NRZ signal has been obtained from a 20-GHz resonance shift of the inner ring. This design is particularly meaningful for high-speed signal modulation based on microring resonators.
Fig. 1. (a) Single ring modulator scheme. (b) The proposed coupled-ring modulator scheme: only the ring adjacent to the waveguide is driven, which requires a smaller resonance shift and enables higher modulation speed and higher extinction ratio.
Fig. 2 .
Fig. 2. Comparison among the three examples of the modulation of a 60-Gb/s NRZ signal shows that the proposed modulator requires 1/3 of the resonance shift (R.S.) to achieve the modulation with negligible distortion compared to the critically coupled single ring modulator. All figures are plotted on the same scale with the same output power.
Fig. 3 .
Modulation bandwidth and extinction ratio versus the ring-to-waveguide coupling coefficient κ 1 for the (a) single ring and (b) coupled ring modulators.
Fig. 4 .
Fig. 4. (a) Extinction ratio versus the coupling coefficient κ_2 between the inner and outer rings. Critical coupling occurs around κ_2 = 0.013, where the extinction ratio tends to infinity. (b) Frequency chirp is switched from negative to positive as κ_2 increases.
Fig. 5 .
(a) Back-to-back BER curves of the 60-Gb/s NRZ signals modulated with the single ring (linewidth = 60 GHz, resonance shift = 60 GHz and 20 GHz) and coupled-ring modulators (linewidth = 20 GHz, resonance shift = 20 GHz). (b) A 40-Gb/s NRZ signal under different chromatic dispersion values, where the coupled-ring modulators with different effective chirps are achieved by varying the ring-to-ring coupling coefficient κ_2.
"Engineering",
"Physics"
] |
Incorporating topological representation in 3D City Models
3D city models are being extensively used in applications such as evacuation scenarios and energy consumption estimation. The main standard for 3D city models is the CityGML data model, which can be encoded through the CityJSON data format. CityGML and CityJSON use polygonal modelling in order to represent geometries. True topological data structures have proven to be more computationally efficient for geometric analysis than polygonal modelling. In a previous study, we introduced a method to topologically reconstruct CityGML models while maintaining the semantic information of the dataset, based solely on the combinatorial map (C-Map) data structure. As a result of the limitations of C-Map's semantic representation mechanism, the resulting datasets could suffer either from semantic information loss or from the redundant repetition of semantics. In this article, we propose a solution for a more efficient representation of geometry, topology and semantics by incorporating the C-Map data structure into the CityGML data model and implementing a CityJSON extension to encode the C-Map data. In addition, we provide an algorithm for the topological reconstruction of CityJSON datasets in order to augment them according to this extension. Finally, we apply our methodology to three open datasets in order to validate our approach when applied to real-world data. Our results show that the proposed CityJSON extension can represent all geometric information of a city model in a lossless way, providing additional topological information for the objects of the model.
Introduction
3D city models have been increasingly adopted in modern analysis of urban spaces, such as the simulation of evacuation scenarios [1] and optimisation of energy consumption for city districts [2,3]. Their key benefit is that they can describe complex 3D geometries of city objects, such as buildings, vegetation and roads; and their semantic information, such as their purpose of use and year of construction.
CityGML is the most commonly used data model for the representation of 3D city models [4]; it can be encoded in JSON through the CityJSON data format. The data model incorporates the GML representation, which describes geometric shapes by their boundaries through a method referred to as "polygonal modelling". While polygonal modelling is generally considered a robust representation of 2D data, it has proven inefficient for representing 3D objects. This is reflected in the limited number of 3D processing algorithms that can be easily applied to polygonal modelling [5].
Topological data structures have been introduced in GIS as an alternative to polygonal modelling. Their main characteristic is that they explicitly describe the adjacency and incidence relationships between geometric objects. These relationships can improve the performance of geometric processing. For example, Maria et al. [6] exploited the topological properties of geometries in order to improve the efficiency of ray tracing in architectural models. Furthermore, topological data structures have the ability to scale to higher dimensions without adding unnecessary complexity [7]. Therefore, typical GIS operations can be described as dimension-agnostic algorithms which can be applied in an arbitrary number of dimensions. For example, Arroyo Ohori et al. [8] proposed a solution for the extrusion of objects of any number of dimensions.
For this reason, we investigated the use of ordered topological structures and, more specifically, combinatorial maps (C-Maps) as an alternative to GML for the representation of geometric information in 3D city models. C-Maps combine the powerful algebra of geometric simplicial complexes with the ease of construction of polygonal modelling [7]. On the practical side, they are implemented in a software package as part of CGAL (The Computational Geometry Algorithms Library (http://www.cgal.org)) and they are efficient with respect to memory usage [9]. While C-Maps originally store only topological relationships between objects, they can easily be enhanced with the association of coordinates to vertices, which results in a linear cell complex (LCC) that incorporates both geometric and topological information.
LCCs based on C-Maps have been used before in 3D city models. Diakité et al. [10] proposed a methodology for the topological reconstruction of existing buildings that are represented through polygonal modelling; they then used the topological information to simplify the buildings' geometry. This approach is based on the extraction of a soup of triangles from the original geometry, which are later stitched together according to their common edges in order to identify the topological relationships. During this intermediate step, the semantic information of the original model, such as hierarchical relationships, is lost, as the soup does not retain the information of the original model. Diakité et al. [11] further refined this process and applied it to BIM and GIS models, in order to exploit the topological information to identify specific features of buildings. Although this application includes the reconstruction of CityGML models, it results in a semantics-free model where the original subdivision into city objects is lost.
Previously, we introduced a methodology for the topological reconstruction of CityGML models to LCCs based on C-Maps with preservation of semantics [12]. Because this methodology relies solely on the C-Maps data structure for the representation of all information of the 3D city model, the resulting model suffers from either the occasional loss of the semantic subdivision of city objects or a redundancy of information. For example, in that article we topologically reconstructed the 3D city model of Agniesebuurt, a neighbourhood of Rotterdam, which was missing the intermediate walls between adjacent buildings. As a consequence of the missing walls, multiple individual buildings were merged into the same volumes in the resulting C-Map. This causes the loss of semantic information for some buildings during the reconstruction, as only one city object's information could be attached to the resulting volume.
In this paper, we propose an improved methodology for the topological representation of CityGML models, in order to avoid the limitations of semantics representation in the C-Maps data structure and the limitations of topological representations in CityGML. To achieve that, we integrate the original CityGML data model with C-Maps in order to combine the semantic-representation capabilities of the first with the benefits of a topological data structure. This topologically-enhanced data model is implemented in CityJSON through an extension. We also develop an algorithm in order to transform existing CityJSON datasets. We applied our algorithm to several open datasets to assess the robustness of our method and evaluate the ability of the proposed data model to represent the peculiarities of various datasets.
The CityGML Data Model
CityGML is a data model and an XML-based format that has been standardised through the Open Geospatial Consortium (OGC) in order to store and exchange 3D city models [4]. It defines an object-oriented approach for the representation of city objects, utilising techniques such as polymorphism in order to provide enough flexibility.
In CityGML, a city model contains a number of city objects of different types, all of which inherit from the basic abstract class CityObject. Different types of objects can be represented through derived classes: (a) composite objects, such as CityObjectGroup; (b) specialised abstract classes, such as AbstractBuilding; or (c) actual city objects, such as CityFurniture and LandUse (Figure 1). Given that a CityGML dataset has a tree structure, the objects can either be listed as immediate child nodes of the model or be represented in a deeper layout by grouping objects using the CityObjectGroup class. CityGML is a schema that specialises the Geography Markup Language (GML) [4]; therefore it follows GML's geometric representation, which is based on ISO 19107, the international standard that defines the spatial characteristics of geometric features [13]. Every CityObject contains one or more Geometry objects described as GML objects. The Geometry object can be extended through composition; therefore a geometry can be a primitive or a composite object of multiple geometries.
According to the CityGML data model, city object semantics are represented through two mechanisms: object types and attributes. The type of a CityObject is derived from the class used in order to represent it. Then, additional information for every CityObject can be stored in the model through attributes which are described by the CityGML specification. A user can enhance the data model with additional attributes related to their domain requirements, either by using the GenericAttribute class or by developing an application domain extension (ADE).
In addition to city object information, CityGML provides a mechanism to further define semantics of individual surfaces that compose the geometry of an object. This is achieved through the SemanticSurface, which can semantically describe polygons that bound a geometry. For instance, an individual surface of a building can be defined as a RoofSurface to denote that its part of the building's roof. Furthermore, semantic surfaces can be enhanced with attributes in order to attach information to them, e.g., a WallSurface can have attributes describing its colour.
The CityJSON Data Format
CityJSON is a data format which uses the JavaScript Object Notation (JSON) encoding in order to implement a subset of the CityGML data model. Its goal is to serve as an alternative to the CityGML data format, which is verbose and complex due to its GML (and, thus, XML) nature. For example, there are at least 26 different ways to encode a simple four-point square in GML (https: //erouault.blogspot.com/2014/04/gml-madness.html).
CityJSON uses a simpler structure that allows for less ambiguity and verboseness. First, it uses the JSON encoding, which is easier to parse and write. This is due to its representation, which can be mapped directly to the data structures that are supported by most modern programming languages: key-value pairs (known as maps or dictionaries) and arrays. Second, CityJSON promotes a "flat" list of city objects, while hierarchy can be implied through internal attributes. For example, the parents and children attributes can be used to associate references between city objects, as shown in the example below. Similar to ADEs for the CityGML data format, CityJSON also provides an extension mechanism for defining domain-specific city objects and attributes. Through CityJSON Extensions, one can introduce new types of city objects or append existing ones with attributes related to the subject of the extension.
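For illustration, such parents/children references can look like the following minimal snippet (the object identifiers are invented for this example):

```json
{
  "CityObjects": {
    "building-1": {
      "type": "Building",
      "children": ["buildingpart-1"]
    },
    "buildingpart-1": {
      "type": "BuildingPart",
      "parents": ["building-1"]
    }
  }
}
```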
Topology in 3D City Models
As mentioned in Section 2.1, CityGML specialises GML, which uses the geometric representation of ISO19107. While GML provides a topology package, CityGML does not deploy it as GML topology is considered "very complex and elaborate" [4]. Instead, it utilises the XML mechanism of XLinks in order to re-use common geometric primitives between objects, which can be seen as an implicit representation of adjacency information in the model [14].
Li et al. [15] introduced a topological representation of 3D city models through a proposed CityGML ADE. Their solution offers a way to store topological relationships (such as touch, overlap and disjoint) as decorators of CityGML geometries.
In both approaches, topology is considered as an additional layer of information on top of the geometric representation. Therefore, they do not intend to use topological data structure to represent the objects' shapes.
Linear Cell Complexes and Combinatorial Maps
A linear cell complex is a subdivision of the Euclidean space E^n into non-intersecting cells, where an i-cell is homeomorphic to an i-dimensional ball and every i-cell is linear (i.e., collinear, coplanar, etc.). For example, in a three-dimensional linear cell complex the zero-cells are vertices, the one-cells are edges, the two-cells are facets and the three-cells are volumes. A linear cell complex is a special case of a CW complex whose attaching maps are homeomorphisms and whose cells are polyhedra [16]. Cell complexes are used in applications such as computer graphics, Computer Aided Design (CAD) and geographic information systems in order to discretise 3D geometric objects [17].
A combinatorial map (C-Map) is a data structure that represents an orientable topological partition of n-dimensional space [18], and can be used to describe the combinatorial part of a linear cell complex. C-Maps are similar to generalised maps (G-Maps) [19], although the former can represent only orientable cells, while the latter can represent non-orientable cells as well.
C-Maps represent cells in space through dart elements. Darts are similar to half-edges (from the half-edge data structure) generalised to an arbitrary number of dimensions: every part of a directed edge that belongs to an incident i-cell (0 < i ≤ n) is a dart. For instance, when two two-cells (facets) share a common edge, the edge is described by two darts (one for each facet). By definition, a dart can only belong to one i-cell, and it can be considered the equivalent of two darts of a G-Map associated through α_0.
Darts are connected through β_i links (where 0 ≤ i ≤ n), so that every dart contains one β_i for each i ∈ {0, . . . , n} (Figure 2). A β_i is a link to the next dart in the i-cell. For example, in a 4D C-Map, the β_3 of a dart d links to the dart that belongs to the same edge (one-cell) of the same facet (two-cell) of the same polychoron (four-cell) as d, but is part of the adjacent volume (three-cell). A null dart (denoted as ∅) is introduced to the C-Map in order to describe the β_i of a dart that is not linked to another dart.
Figure 2. Example of a two-dimensional C-Map describing two polygons [20]. The graph is composed of seven zero-cells (vertices), eight one-cells (edges) and two two-cells (facets). These cells are described by darts (denoted as arrows) and their β_i's (denoted as dashed arrows). Every edge of a facet is described by a dart, and β_i's are used to describe the links between them: β_1 is the dart of the "next" edge of the same facet and β_2 is the dart of the "next" facet of the same edge. β_0 is a special link to the dart of the "previous" edge of the same facet. In this example, only one dart's β_i's are shown.
To modify C-Maps, we define the sewing operation, according to which pairs of corresponding darts of two i-cells are linked in one dimension. An i-sew operation associates two i-cells along their common incident (i − 1)-cell. This means that the β_i's of every pair of darts along the two (i − 1)-cells have to be linked.
Cells in a C-Map can be associated with information through a mechanism of attributes. A dart holds a set of attributes, one for every dimension i of the C-Map, called the i-attribute of the dart. For example, to set a property of a facet (e.g., its colour), one can set the colour value as the two-attribute of every dart of this facet (two-cell).
As mentioned above, C-Maps can describe the combinatorial part of a linear cell complex (LCC). An LCC can be fully described by associating the vertices of the C-Map with points of an n-dimensional geometric space (as coordinates) and assuming that all cells of the C-Map are linear. The resulting LCC then contains both geometric and topological information for the space.
The C-Map and LCC data structures have been implemented in CGAL as software packages ( Figure 3) [9]. Therefore, it is possible to implement software in the C++ programming language which incorporates them.
Topological Representation of 3D City Models
To represent the topological information of city objects in a CityJSON file, we followed two steps: first, we introduced the LCC entities into the CityGML data model; and, second, we developed a CityJSON extension that provides the necessary encoding instructions in order to store that information in a CityJSON file. We developed the extension definition according to the respective CityJSON specification (https://www.cityjson.org/specs/1.0.0/#extensions). The CityJSON extension definition is available as open source on GitHub (https://github.com/tudelft3d/cityjson-lcc-extension).
Data Model
First, it must be possible to store darts in the city model. Second, every dart has to be linked with: (a) the point that defines the location of the dart's zero-cell; (b) the city object that is associated with the dart's two-cell; (c) the semantic information associated with the dart's two-cell; and (d) its β_i darts. It must be noted that β_0 is not essential for the storage of the map, as it can be implied by the β_1's of the structure. Therefore, for a three-dimensional LCC, only β_1, β_2 and β_3 need to be stored.
We decided to associate surfaces (two-cells) with the city objects' semantic information, which might seem a counter-intuitive solution. Initially, volumes (three-cells) might seem a more suitable match for association with city objects; after all, a city object is normally composed of volumes. Unfortunately, as we proved in [12], that is not always the case: in some city models, there can be multiple city objects that topologically belong to one volume.
While in [12] we proposed a way of forcing three-cells to be divided into multiple volumes when their two-cells belong to different city objects, the final result is not correct from a topological perspective. The key benefit of using an LCC data structure is to store the topological relationships of a city model's geometry, and such a solution would largely undermine the benefits of using a topological representation in the first place. Therefore, we decided to associate city objects' semantic information with two-cells in order to maintain the binding of information in cases where one volume is associated with multiple city objects. This way, we ensure a topological representation that is consistent with the geometry and retain a complete association of semantics with the LCC.
We designed a data model that represents the LCC information through the Dart class (Figure 4). The class contains the necessary information as attributes: (a) the vertexPoint attribute points to the Vertex object that stores the coordinates of the zero-cell; (b) the parentCityObject attribute points to the CityObject associated with the dart's two-cell; (c) the semanticSurface attribute links the dart's two-cell with the respective SemanticSurface (in case the original polygon has semantic information attached); and (d) the beta attribute is a three-element array of Dart objects, representing β_1, β_2 and β_3. While the Dart class and its attributes can represent a complete linear cell complex, they only preserve one-way links from the C-Map to the city model. Nevertheless, it is equally important to be able to identify the three-cells that compose a city object without having to iterate through the whole linear cell complex. This is achieved by introducing the lccVolumes attribute in the CityObject class, which contains links to the three-cells related to the city object. As it is not possible to link directly to a three-cell, this list contains one of each volume's darts.
CityJSON Extension
We also developed a CityJSON extension in order to implement the data model as described in Section 3.1. This implementation follows the original principles of the CityJSON data format, which aims to be easy-to-use and compact. For this reason, we defined two optimisations in the specification of the extension.
First, we reuse the "vertices" list as described in the CityJSON specification. This fits perfectly with the requirement of associating points to the zero-cells of the C-Map in order to achieve the linear geometric embedding. Therefore, it is sufficient for the completeness of the final dataset to store the dart information (their betas and parent object associations) and link them to the existing list of "vertices", instead of introducing a new list.
Second, although in the data model a dart is considered an object with four attributes ("betas", "parentCityObject", "semanticSurface" and "vertex"), such a structure would produce a verbose JSON encoding. That is because, for every dart in the dataset, the same attribute names would have to be repeated as keywords. Given the large number of darts required in order to represent complicated cell complexes such as 3D city models, this would result in a blow-up of the resulting file's size. Instead, we decided to store the four attributes as lists of the same length, equal to the number of darts. This way, we avoid the repetition of keywords, as they only appear once in the file, thus minimising its size.
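The difference can be sketched as follows (the values are placeholders). The object-per-dart encoding repeats every keyword for each dart:

    { "betas": [2, -1, -1], "parentCityObject": "building-1", "semanticSurface": [0, 0], "vertex": 14 }

whereas the columnar encoding we adopted states each keyword once:

    "betas": [ [2, -1, -1], [3, -1, -1] ],
    "parentCityObjects": [ "building-1", "building-1" ],
    "semanticSurfaces": [ [0, 0], [0, 0] ],
    "vertices": [ 14, 15 ]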
Darts Representation
To store the darts of the LCC, we added the new "+darts" root property to the main "CityJSON" object. It contains four lists with the values of the respective attributes of the LCC's darts: "betas", "parentCityObjects", "semanticSurfaces" and "vertices". The "+darts" object also has a "count" property, which states the number of darts in the LCC. These lists are indexed, so the nth element of every list corresponds to the respective attribute of the nth dart of the LCC (the lists are one-based, therefore the first element of a list is denoted by the number 1).
According to our implementation, a CityJSON file containing an LCC contains the following properties. The "betas" property contains the β_i's of the darts. Every element of the list is an array of three integers which refer to the β_1, β_2 and β_3 of the current dart, respectively. If the dart is i-free, β_i is set to −1; otherwise, β_i refers to the respective dart's index in the list.
The "parentCityObjects" and "vertices" properties are single lists. The first is composed of the IDs associated with the two-cells of each dart; and the second is composed of the index of the vertex, from the CityJSON's "vertices" list, associated with each dart's zero-cell.
The "semanticSurfaces" property associates the two-cell of a dart with a semantic surface of the city model. Every item of this list is an array of two integers, which refer to the indices of the geometry and the semantic surface, respectively, under the parent city object. If the two-cell of a dart does not have a semantic surface associated, then the value of the second value is set to −1.
The following is an example of a "darts" object that would represent an LCC with one triangle: 1 "+ darts ": {
CityObject to LCC Association
To be able to efficiently identify the three-cells that compose a city object, we added the "+lccVolumes" property to the "CityObject". The "+lccVolumes" property is a list of darts that belong to the respective three-cells. It has to be noted that not all darts related to a city object are stored in this list; instead, one dart per three-cell is used as an index. Therefore, the number of elements in the list is equal to the number of volumes associated with the city object.
The following is an example of a city object which is associated with three volumes of the LCC.
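A sketch with a hypothetical object ID and dart indices:

    "CityObjects": {
      "building-1": {
        "type": "Building",
        "geometry": [ ... ],
        "+lccVolumes": [ 1, 58, 134 ]
      }
    }

Each integer is the one-based index, in the "+darts" lists, of one dart of the respective volume.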
Algorithm
To validate our proposed extension, we developed an algorithm that parses a CityJSON model and appends the LCC information to the model. In [12], we proposed two variations of an algorithm that topologically reconstructs a CityGML model. For the purposes of the research presented in this article, we worked on the basis of the "geometric-oriented" algorithm. We chose this variation because it results in a true topological representation of the model. In addition, our proposed extension does not lose semantic information as it associates two-cells of the resulting LCC with the city model. Therefore, even when multiple city objects are represented under the same three-cell, the semantic association is retained.
We introduced a number of modifications to the original algorithm in order to adapt it to the requirements of our research. First, the algorithm had to conform to a flat representation of city objects and geometries, as described by the CityJSON specification; therefore, the recursive call in Algorithm 2 has been removed. Second, we only had to define the essential information that ensures a complete association between city objects and cells, as described in Section 3.2. The resulting methodology is described by Algorithms 1-5.
We have to clarify that the purpose of the reconstruction is to describe the topological relationships between the geometries exactly as they appear in the original dataset. No modification of the geometry is conducted during the reconstruction, not even in order to ensure the topological validity of the resulting LCC.
Initially, the reconstruction is conducted by the main body (Algorithm 1). This function iterates through the city objects listed in the city model and calls ReadCityObject for every city object.
Function ReadCityObject (Algorithm 2) iterates through the geometries of the city object. For every geometry, function ReadGeometry is called and, then, the created darts' two-attributes are associated with the object's ID and the geometry's ID.
The ReadGeometry function (Algorithm 3) parses the geometry by getting all polygons that bound the object. For every polygon, the algorithm iterates through the edges and creates a dart that represents each edge by calling GetEdge. The newly created dart is then associated with the semantic surface of the polygon by assigning the respective two-attribute value. Finally, the algorithm accesses the items of index I_3 in order to find adjacent three-free two-cells, in which case the two two-cells must be three-sewed. This ensures that volumes sharing a common surface have their adjacency relationship represented in the resulting LCC.
Function GetEdge (Algorithm 4) creates a dart that represents an edge, given the two end points of the edge. It gets one dart per point by calling the GetVertex function and then conducts a one-sew operation between them. Finally, it iterates through index I_2 in order to find adjacent two-free one-cells so that they can be two-sewed.
Finally, function GetVertex (Algorithm 5) is responsible for creating darts that represent a vertex in the LCC. The function iterates through the LCC in order to find an existing one-free dart with the same coordinates as the point provided; if such a dart is found, it is returned. If no dart is found, the algorithm creates a new dart, associates the coordinates with its zero-attribute, and returns it.
Algorithm 1: Main body of reconstruction.
Input: city model cm to be processed
Output: linear cell complex lcc that contains the geometry and semantics of the provided city model

    R ← get all root objects of cm
    I_2 ← an empty index of darts
    I_3 ← an empty index of darts
    lcc ← ∅
    foreach obj ∈ R do
        ReadCityObject(lcc, I_2, I_3, obj)
    return lcc

Algorithm 2: ReadCityObject.

Input: linear cell complex lcc where the geometry of the city object will be added, index I_2 of 2-free darts in lcc, index I_3 of 3-free darts in lcc, city object obj to be processed
Result: updates the lcc and the index variables according to the provided city object

    G ← get all geometries of obj
    g_id ← 0
    foreach g ∈ G do
        D ← ReadGeometry(lcc, I_2, I_3, g)
        foreach d ∈ D do
            2-attr-object(d) ← id(obj)
            2-attr-geometry(d) ← g_id
        g_id ← g_id + 1
Algorithm 3: ReadGeometry.
Input: linear cell complex lcc where the new two-cell will be created, index I_2 of 2-free darts in lcc, index I_3 of 3-free darts in lcc, geometry g that corresponds to a set of polygons
Result: a new two-cell is created in lcc and I_2 and I_3 are updated accordingly
Output: darts D that were used for the creation of the two-cell

    D ← ∅
    P ← get all polygons of g
    foreach poly ∈ P do
        foreach v_cur ∈ poly except the last do
            v_next ← get next vertex of poly
            …

Finally, the main iteration repeats ReadCityObject for every city object. Following the same logic as before, GetEdge ends up being called for every vertex of the city model (v_citymodel). Therefore, the time complexity of the algorithm is of order O(v_citymodel · d_lcc).
Implementation
We implemented the proposed algorithms in computer software using the C++ programming language. We used the CGAL LCC package (https://doc.cgal.org/latest/Linear_cell_complex/index.html) for the data structure that keeps the topological information. JSON for Modern C++ (https://github.com/nlohmann/json) by Niels Lohmann was used for reading and writing CityJSON.
The tool created is a command-line application that is available under the MIT license in a public repository (https://github.com/tudelft3d/cityjson-lcc-reconstructor). The build produces an executable that can be provided with an existing CityJSON file and creates an LCC that represents its geometry. The resulting LCC can be saved either as a CGAL C-Map file (.3map) or as a new CityJSON file.
Validation of Methodology
To verify the completeness of our proposed methodology, we applied it to three open datasets and visualised their topological and semantic information.
Den Haag
This is a dataset of the buildings and terrain provided by the municipality of The Hague. The model was created in 2010 and is based on the aerial photos acquired and the registration of buildings (BAG (https://zakelijk.kadaster.nl/bag)) of that year. The dataset covers around 112,500 buildings of the municipality of The Hague and the neighbouring municipalities, divided into 152 tiles.
We tested our methodology against the first tile of the dataset, which is available as an example dataset for CityJSON (http://www.cityjson.org/en/0.9/datasets/). The file contains 2498 city objects, of which one is a TINRelief and the rest are Building objects. It contains 1991 LOD2 geometries of the MultiSurface and CompositeSurface types, with the semantic surfaces RoofSurface, WallSurface and GroundSurface present.
Delfshaven
This is the first version of Rotterdam's 3D city model, which was released as open data in 2010. It was created based on the basic topographical map of the Dutch Kadaster (BGT (https://zakelijk.kadaster.nl/bgt)) and LiDAR data for the extrusion of the buildings. It is divided into 92 files separated according to the municipality's neighbourhood administration boundaries.
In the research described in this article, we worked with the CityJSON file containing the Delfshaven neighbourhood. The file contains 853 building objects of LOD2 and a respective number of MultiSurface geometries with three semantic surfaces present: RoofSurface, WallSurface and GroundSurface.
In this dataset, the walls between adjacent buildings are missing. This causes multiple buildings to merge, topologically, into one volume, instead of forming individual volumes next to one another.
Railway Demo
This dataset is a procedurally produced 3D city model intended to demonstrate most of the CityGML 2.0 city object types. It is available in CityJSON format and contains 121 city objects of fourteen different types. It also contains 105 MultiSurface and CompositeSurface geometries with semantic surfaces.
LCC Viewer
We built a viewer to evaluate the topologically reconstructed CityJSON files (https://github.com/liberostelios/lcc-viewer). Our viewer's source code is based on CGAL's demo 3D viewer of the LCC data structure. We added two features that we needed for our experiments: (a) the ability to load CityJSON files with an LCC; and (b) an option to render surfaces, and thus objects, in three different ways: per volume, per semantic surface type and per city object ID.
Reconstruction and Evaluation of Datasets
Using our software (Section 4.2), we topologically reconstructed the three datasets and created three CityJSON files containing the LCC according to the proposed extension (Section 3.2). The characteristics of the resulting datasets are shown in Table 1. The resulting datasets grew significantly in size after the reconstruction. We identified that the main factor of growth is the number of darts, which contributed consistently to the added size of the resulting files: on average, the three datasets required about 65 bytes per dart.
To verify the complete representation of semantics and cell associations, we inspected the final datasets in our viewer (Section 5.2). Every dataset was visualised with three different facet colouring methods: per individual volume, per semantic surface type and per city object ID (Figure 5). The per-volume facet formatting highlights the topological characteristics of the dataset. Using the per-city-object facet formatting, we verified that the association between the semantics of the dataset and the cells of the LCC is retained and complete. Finally, using the per-semantic-surface formatting, we verified that the association between surfaces and their semantic information in the respective geometry of the original model was also maintained. The inspection proved that the proposed CityJSON extension is complete enough to provide the association between semantics and cells: the viewer successfully highlighted the datasets according to both topology (volumes) and semantics (city object IDs and semantic surfaces).
During the inspection of the statistics and the visualisation, we identified a degenerate case. We would have expected the Delfshaven dataset to have fewer volumes (three-cells) than city objects, given that multiple buildings were merged during the reconstruction. Nevertheless, that did not occur: according to Table 1, the number of three-cells was greater than the number of city objects. After further investigation of the model, we identified two reasons for this. First, a small number of the three-cells was composed of single facets which were topologically invalid with respect to their surroundings; therefore, they were not merged into the same volume as the rest of the surfaces of those objects. Second, a great number of three-cells was "noise" in the LCC, as they were single edges without surface or volume (Figure 6). We verified that those edges were present in the initial model as zero-area surfaces and were retained in the resulting LCC. Figure 6. Topologically invalid surfaces and "noisy" single-edged volumes identified in the Delfshaven LCC when the buildings of the area were hidden.
Conclusions and Discussion
In this article, we propose a topological representation for 3D city models by incorporating an LCC based on the C-Map data structure in the CityGML data model. We materialised this solution by developing an extension for CityJSON and the respective algorithms that can compute the topological links based on the geometry of an existing dataset. Furthermore, we implemented this solution in computer software and applied it to three open CityJSON datasets to evaluate the completeness of the proposed solution.
We incorporated the C-Map data structure as its implicit representation of cells, through darts, is optimal with regard to storage space. As a consequence of its implicit representation, a C-Map cannot be associated with city objects according to the suggestions of ISO19109. This is because ISO19109 assumes that the topology of a feature (i.e., city object in this context) is described through an explicit representation, such as the one described in ISO19107.
We need to clarify that Algorithm 3 does not take holes (i.e., interior rings) into account during the reconstruction. This is because C-Maps cannot represent holes in two-cells. A potential solution would require altering the geometries (e.g., by inserting an edge between the outer and inner rings of a polygon). Nevertheless, such an operation is outside the scope of this research.
In addition, the algorithms provided do not distinguish geometries according to their LoDs. This means that, in the case of a multi-LoD dataset, the resulting LCC will represent the geometries of all LoDs. While in some cases this might not be an issue (e.g., if different objects have different LoDs), in cases where one object has more than one LoD, the resulting LCC will contain both versions. Such an LCC can be perceived as a wrong representation of the city model's geometry. To deal with these cases, our implementation allows users to filter the geometries by LoD. Nevertheless, this option is not present in Algorithm 2, because every application might have different requirements; we therefore leave it to the reader's discretion to alter the respective algorithm according to their needs.
Furthermore, as mentioned in Section 4, the algorithm does not modify the original geometries in order to ensure the topological validity of the resulting LCC. That is because the purpose of this methodology is to represent the topological relationships of objects exactly as described by the original dataset. For example, two adjacent buildings sharing the exact same common wall will be represented as three-linked in the resulting LCC, but that is not the case if they only partially share a surface. Handling the latter would require some kind of topological clean-up operation to ensure the topological validity of the input. Nevertheless, this is outside the scope of this research and can be investigated independently.
Our results show that it is possible to represent any CityGML dataset based on the C-Map data structure without losing semantic information from the original dataset. In addition, the proposed two-way linking mechanism between the entities of the data model and the LCC provides efficient access to the resulting 3D city model through either a semantics-oriented traversal (by iterating through every city object of the model) or a geometry-oriented traversal (by visiting all darts of the LCC).
We believe that our solution can provide useful information for the geometric processing of 3D city models. For example, it can assist in the repair of invalid geometries, such as non-watertight solids, based on the existence of two-free darts in the LCC. In addition, our findings regarding "noisy" three-cells in the Delfshaven dataset (Section 5.3) show that LCC statistics can provide useful insights for the identification of invalid or erroneous data. However, the repair of invalid geometries is outside the scope of this research; therefore, overlapping cells should be expected in the resulting LCC whenever the source 3D city model geometries are not topologically valid.
In the future, we intend to utilise this CityJSON extension in order to conduct analysis on the topological matching of existing multi-LoD datasets. We are also planning to use the proposed data structure in order to represent those datasets in four dimensions.
Heat Recovery from Wastewater—A Review of Available Resource
The EU Directive 2018/2001 recognized wastewater as a renewable heat source. Wastewater from domestic, industrial and commercial developments maintains considerable amounts of thermal energy after discharge into the sewer system. It is possible to recover this heat by using technologies like heat exchangers and heat pumps, and to reuse it to satisfy heating demands. This paper presents a review of the literature on wastewater heat recovery (WWHR) and its potential at different scales within the sewer system, including the component level, building level, sewer pipe network level, and wastewater treatment plant (WWTP) level. A systematic review is provided of the benefits and challenges of WWHR across each of these levels, taking into consideration technical, economic and environmental aspects. This study analyzes important attributes of WWHR such as the temperature and flow dynamics of the sewer system, the impacts of WWHR on the environment, and the legal regulations involved. Existing gaps in the WWHR field are also identified. It is concluded that WWHR has a significant potential to supply clean energy at scales ranging from buildings to large communities and districts. Further attention to WWHR is needed from the research community, policymakers and other stakeholders to realize the full potential of this valuable renewable heat source.
Introduction
The European Union (EU) has established a target to achieve a 40% reduction in greenhouse gas (GHG) emissions (from 1990 levels) and to increase the renewable energy share of the union to 32% of total energy generation [1]. Under the regulation on the governance of the energy union and climate action (EU/2018/1999), EU member states have established 10-year integrated national energy and climate plans (NECPs) [2]. Under these circumstances, renewable sources of energy have been of great interest in recent years. The EU Directive 2018/2001 [3] specified wastewater as a renewable heat source in compliance with the European environmental goals. Moreover, under the European Green Deal Investment Plan, member states will be provided supportive aid to implement measures like the re-use of waste heat [4].
The wastewater of domestic, industrial, or commercial buildings maintains considerable quantities of thermal energy, which is discharged to the sewer system at temperatures ranging from 10 to 25 °C. It is estimated that 6000 GWh per year of thermal energy is lost in sewers in Switzerland [5], equivalent to 7% of the country's total heating demand. Wastewater in sewer pipes in Germany is estimated to contain enough energy to heat 2 million homes [6]. This resource can be exploited through heat exchanger and heat pump technologies, applied at different points in the sewer system, from end-user to water treatment, that is, at the component level, in buildings, in public sewers, and at WWTPs. These locations have their respective advantages and disadvantages concerning their energy, economic and environmental prospects. For example, low heat loss is present at component-level WWHR, while greater heat density is available at WWTPs.
Existing papers in the literature have studied different elements of WWHR in different settings, such as heat recovery at different locations within the sewer system, with heat exchangers, with a combination of heat exchangers and heat pumps, examining cost-effectiveness and environmental impacts, and so forth. However, a synthesis of the potential of heat recovery across the sewer system's entire water cycle, from leaving the drain of a building to discharge into water bodies after treatment, is missing. Such a synthesis would be essential to present the complete picture of energy availability in wastewater in the sewer system and the advantages, disadvantages, and challenges of its exploitation in different locations, given its recent recognition as an essential renewable heat source.
This paper presents a comprehensive review of the literature on WWHR at different points along the sewer system, from user discharge to water treatment. The review outlines the state of the art in sewer water temperature dynamics, the environmental impact of WWHR and the legal regulations involved. Heat exchanger design, type, configuration, performance and fouling in WWHR settings are not considered in this paper, as other review papers have addressed these [7][8][9].
The review paper is systematically divided into WWHR at different levels along the water cycle, addressing the benefits and challenges of each. Interested stakeholders can gain important insights into the level of heat recovery they are interested in. Furthermore, the work also highlights other important aspects of WWHR, such as economic analysis and environmental impact, which can assist policymakers in making relevant decisions. For the research community, the work emphasizes the research gaps in the field and suggests possible avenues for future research.
Methodology
In identifying the sources for the literature review, an initial search with the Google Scholar search system was performed to understand the available literature, using the following keywords: wastewater heat recovery, drain water heat recovery, and wastewater source heat pump. The majority of the articles from the Google Scholar search were found to be published by Elsevier. Therefore, the ScienceDirect database, which contains full-text articles from journals and books published by Elsevier, and the Scopus database were used to narrow down the literature search. Furthermore, the artificial-intelligence-based search system Semantic Scholar was also used with the same keywords; this returned similar research data with some additional articles and dissertations. In addition to the databases and search systems, some literature was identified using the well-known snowball method, that is, looking for research data in the bibliographies and citations of the primary literature.
A database of 154 research items was prepared, which included research articles, dissertations and technical reports. The abstract and summary of each research item were carefully analyzed to narrow down the results to those that fit the purpose of this literature review. The research data were further divided into categories based upon the structure of the present work, such as research items belonging to different levels of WWHR, investigating wastewater temperature and flow modelling, addressing environmental impacts, and so forth.
Wastewater Heat Recovery
Wastewater is "used water from any combination of domestic, industrial, commercial or agricultural activities, surface runoff/stormwater, and any sewer inflow/infiltration" [10]. Almost 20% of energy use in the domestic sector is associated with water heating for various purposes (e.g., shower, bath, dishwasher, washing machine, cooking, etc.) [11]. Wastewater, therefore, contains a significant amount of heat energy that can be recovered and used for preheating the cold-water supply in a building or space-heating, depending upon the magnitude of heat available. The heat embedded in wastewater depends upon its temperature and flow rate. The content of available heat for recovery from wastewater can be calculated using the heat transfer equation: where,Q is the recovered heat content per unit time,ṁ is the mass flow rate of wastewater, c p is its specific heat capacity, ρ is the density of wastewater, and ∆T is the temperature change of wastewater due to heat recovery. As per Equation (1), a higher flow rate and temperature of wastewater results in a higher potential for heat recovery. Independent from the heat recovery system type, two main components may be involved in WWHR: a heat exchanger and heat pump. Although in some situations, heat may be recovered only using a heat exchanger, typically when the average temperature of the wastewater is higher than 30 • C (domestic shower settings). A heat exchanger is a device that facilitates the transfer of internal thermal energy between two fluids while avoiding the mixing of the two. It is a passive technology that does not require any external energy source. Some commonly used heat exchanger types in WWHR applications include double-pipe parallel flow, double-pipe counterflow, shell-and-tube, and plate-and-frame heat exchanger. Heat exchanger types and their performance in WWHR applications are discussed in detail by Culha et al. [9].
A heat pump is a technology that uses electricity and the reverse refrigeration cycle to transfer heat from one place to another [12]. Heat pumps require a low-temperature energy source and convert this low-grade heat into usable heating energy through mechanical work. In the context of WWHR, wastewater serves as the low-grade heat source for the heat pump. A heat pump can work in both heating and cooling modes. In cooling mode, the ambient air of the space which needs cooling acts as the source, and heat is extracted from it, leading to space-cooling. Heat pump types and their performance in WWHR applications are discussed in detail by Hepbasli et al. [7].
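The efficiency of this conversion is commonly expressed through the coefficient of performance (COP), stated here as the standard definition rather than a formulation taken from [7] or [12]:

$\mathrm{COP}_{heating} = \dot{Q}_h / \dot{W}$,

where $\dot{Q}_h$ is the delivered heating power and $\dot{W}$ is the electrical power consumed. A warmer wastewater source reduces the temperature lift the heat pump must provide and therefore increases the COP.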
Energy Recovery Options
There are four main possible locations within the sewer system for energy recovery from wastewater: (i) at the component level; (ii) at building level; (iii) in the sewer pipe network, and (iv) from WWTPs. Figure 1 shows these possible options of WWHR.
Heat Recovery at Component Level
At this level of WWHR, heat is recovered from wastewater directly after it is produced by specific activities relating to a single component (e.g., showering, cooking, food processing, etc.). Heat is extracted using a heat exchanger placed directly after the component used in the activity. The recovered heat may be used to preheat the incoming cold-water supply, as in domestic or commercial shower facilities, or be used in conjunction with a heat pump for other purposes. Figure 2 shows the basic working principle of a vertical counter-flow heat exchanger. Shower water heat recovery is the most common application seen in practice at this level. This application has the advantage of a continuous, simultaneous counter flow of wastewater and incoming cold-water supply for use in the shower. Therefore, heat recovery here can be achieved with high effectiveness, and there is no time lag between waste heat availability and heat demand for showering, eliminating the need for heat storage and the resulting losses [14]. A general schematic of shower water heat recovery is shown in Figure 3. In real applications, heat exchangers are placed under the shower tray in either a horizontal or a vertical orientation. Both have their respective advantages and disadvantages. In the vertical configuration, the wastewater is discharged as a falling film flow, whereas in the horizontal orientation, water flows along the bottom of the pipe. Therefore, the effective surface area over which heat is exchanged is larger in the vertical orientation, which leads to higher efficiency [16]. Wong et al. [17] analyzed shower water heat recovery in high-rise residential buildings in Hong Kong. The study investigated the installation of a single-pass counter-flow heat exchanger beneath shower drains in the horizontal configuration. The results showed an annual energy saving of 4-15% from shower water heat recovery. The savings were dependent upon drainage pipe diameter and length, which govern the effective area of the heat exchange surface.
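For context, the effectiveness referred to above is conventionally defined as the ratio of the heat actually transferred to the maximum heat that could be transferred; this is the standard heat exchanger definition, not a formulation specific to [16] or [17]:

$\varepsilon = \dot{Q} / \dot{Q}_{max} = (T_{c,out} - T_{c,in}) / (T_{h,in} - T_{c,in})$,

where the second equality holds for balanced flows (equal heat capacity rates on both sides), $T_{c,in}$ and $T_{c,out}$ are the cold-water inlet and outlet temperatures, and $T_{h,in}$ is the drain-water inlet temperature.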
An experimental study conducted at the Canadian Centre for Housing Technology analyzed the performance of five vertical heat exchangers for WWHR at the component level [13]. The results showed a 9% to 27% reduction in natural gas usage for hot water preparation in domestic shower applications. The energy savings increased with the number of showers and with a lower temperature of the incoming cold-water supply. Another critical factor is the configuration of the WWHR system: energy savings are higher when the recovered energy is used to heat the water flowing into both the hot water tank and the shower valve, compared to heating only the water entering the hot water tank. Similar results for such configurations have been observed by Tomlinson [18], who argued that the overall water flow is balanced in the first configuration, that is, the flows on both sides of the heat exchanger are equal, resulting in higher heat recovery.
On the other hand, a major disadvantage of vertical heat exchangers is the large space requirement for their installation: around 1-2 m of vertical space below the shower tray. In many existing buildings, such installations are not possible due to space limitations. Besides, longer heat recovery units are preferred for better efficiency, and these are more expensive and space-consuming. Consequently, researchers have also investigated improving the horizontal heat exchanger's design to achieve higher effectiveness [19,20]. McNabola and Shields [20] proposed a new horizontal WWHR heat exchanger design to maximize the heat exchange between wastewater and the cold-water supply. The design was based upon placing the cold-water supply pipe inside the wastewater pipe, thus increasing the contact surface area. The results showed that an effectiveness of over 50% could be achieved for the proposed design, which is comparable to some existing vertical WWHR units. Currently, several proprietary horizontal WWHR heat exchangers also exist on the market with similar effectiveness to vertical WWHR heat exchangers [21].
Another study investigated the possibility of using a storage-type WWHR unit [22]. In this case, wastewater is stored in an insulated steel tank and the cold water passes through a copper coil immersed in the wastewater. The proposed system is smaller in height than a vertical WWHR unit and can recover 34% to 60% of the available energy in the wastewater. However, such an implementation would not be economically viable due to the additional tank and insulation costs. Such a design could be successfully combined with a heat pump system [23][24][25].
For the overall viability of a component-level WWHR application, it is vital to consider the financial analysis of the system [26]. The economic viability of a WWHR unit depends not only upon the heat exchanger characteristics but also upon user behaviour, such as shower length, shower head flow rate, desired water temperature, number of showers, and so forth [27]. Słyś and Kordana [28] analyzed the effect of shower length and shower head flow rate upon the payback period and net present value (NPV) of a vertical WWHR heat exchanger. The study observed that with increases in shower length and shower head flow rate, that is, higher water consumption, the NPV of the WWHR system increases. The considered WWHR unit's payback period, under the same flow rate conditions, decreased by 66% with an increase in shower length from 5 to 12 min. The study concluded that WWHR units can bring significant financial savings in dwellings with large amounts of water usage.
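For reference, the NPV used in such analyses follows the standard definition (stated here generically, not as the exact formulation of [28]):

$NPV = -C_0 + \sum_{t=1}^{T} \frac{S_t}{(1+r)^t}$,

where $C_0$ is the capital cost of the WWHR unit, $S_t$ the net savings in year $t$, $r$ the discount rate and $T$ the analysis horizon; higher water consumption raises $S_t$ and therefore the NPV.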
Apart from the additional space requirements for WWHR, another challenge is that such installations can be cost-effective only when fitting a new bathroom or renovating an existing one. The additional cost of retrofitting the WWHR system in an existing bathroom and changing the water pipe infrastructure for the sole purpose of heat recovery can increase the device's overall payback period.
Another major issue of WWHR systems is the fouling of the heat exchanger. Fouling is the accumulation of unwanted deposits on the surface of a heat exchanger [29]. It increases the heat exchanger's thermal resistance, generally referred to as fouling resistance. Due to fouling resistance, the heat exchanger's heat transfer capability decreases, leading to smaller energy savings. Fouling also adds additional maintenance costs to the device. Evidently, horizontal heat exchangers are more prone to fouling than vertical heat exchangers due to the continuous build-up of unwanted deposits on the horizontal plate. There is a lack of information on the fouling and maintenance of horizontal WWHR units for showers. One study performed by a horizontal WWHR unit manufacturer observed that after applying a large amount of shampoo, soap and hair-conditioner on the heat exchanger surface and leaving it overnight, there was a performance drop of 5.5% for only the first 10 min of operation [30]. The study also reported unchanged performance for the device over a two-year interval. Future work is required to examine the fouling performance of a range of heat exchanger types for shower applications.
Apart from shower systems, wastewater heat can be recovered from other components, such as dishwashers and washing machines. These components discharge wastewater at fairly high temperatures; for example, the exit water temperature of a typical household dishwasher ranges from 19 °C to 61 °C depending upon the washing stage [31]. The water consumption during the whole washing cycle is around 33 L. Paepe et al. [31] proposed a storage-based heat recovery system for domestic dishwashers and analyzed its technical and economic performance. The study showed a 25% reduction in the total heating demand of the dishwasher with a payback period of 6 years.
Large-scale components such as dishwashers in industrial kitchens and washing machines in launderettes use large amounts of hot water and thus can have significant potential for WWHR. Wemhoff et al. [32] analyzed the viability of WWHR for dishwashers in a university's dining facility in Philadelphia. The study calculated a payback period of 2 years for the proposed installation of a 146 kW shell-and-tube heat exchanger. An annual source pollution reduction of 13 kg SO2, 6.5 kg NOx and 6.5 metric tons of CO2 was shown.
Adhikari [33] performed a feasibility study to examine WWHR in a public laundry facility with two washing machines, with an average wastewater temperature and flow of 60 °C and 0.11 kg/s. The study reported 103 MWh/year of heat recovery from the laundry facility's wastewater, leading to almost €6371 of economic savings.
A myriad of other hot-wastewater-generating components exist in domestic, commercial and industrial settings aside from showers, dishwashers or washing machines, for example, sinks in hair salons, swimming pool/spa applications, food processing, and so forth. These have received scant attention in the literature to date.
Heat Recovery at Building Level
At this level, heat recovery from the collective wastewater discharge of a single whole building is considered. The wastewater flow and temperature characteristics of this discharge depend upon the building type. Wastewater in a domestic building can maintain a temperature of 10-25 °C over the year [23]. The energy savings from WWHR at the building level can be higher than at the individual component level due to the higher volume of wastewater and the accumulation of multiple hot water activities [28]. However, discharge at the building level also includes cold wastewater in the mix, which reduces the energy potential. At this level, wastewater is commonly collected in a holding tank, and heat is recovered using a heat exchanger [33,34] or a water source heat pump [23][24][25]. For fouling prevention, usually a grease trap system is used to intercept debris in the wastewater. If the recovered heat is not immediately used, it can be stored in a hot water tank (HWT) for later use. A general schematic diagram of WWHR with a heat pump is shown in Figure 4. Researchers have investigated the potential of WWHR at the building scale with numerical feasibility studies and experimental lab-scale approaches [24,25,[35][36][37][38][39]].
Residential Dwellings
In residential houses, complexes and apartments, wastewater amounts depend upon the number of residents in a dwelling and their water-usage activities. Due to the mixing of wastewater from different water sources in the building, including cold-water equipment, the wastewater temperature is often lower than at the individual hot water component level [40]. Ni et al. [24] presented a feasibility study for WWHR in residential developments with a heat pump and evaluated the total source energy consumption of the building for space heating and hot water heating with and without WWHR. The results demonstrated a 17-58% reduction in total energy usage depending upon the building's location. However, the study lacked any economic analysis of the system installation, which is important for assessing financial viability and its likely penetration in practice.
Heat pump technology can be quite expensive and may not be financially suitable for single-family residences. Furthermore, wastewater flow has been found to be insufficient, and to vary too much, in single residences to make such a system economically efficient [41]. Spriet and McNabola [23] presented a numerical feasibility study for WWHR with a heat pump in a typical residential house (Ireland). The study observed energy savings of up to 42%, of a similar magnitude to the findings of Ni et al. [24]. However, despite this impressive savings potential, due to the high capital cost of the technology, the levelized cost of energy increased by 120-130% compared to traditional heating systems, thus making the system not economically viable.
A reference guideline for the economical use of WWHR with a heat pump at the building level is a wastewater flow of at least 8000 to 10,000 L/day (equivalent to 60 people or 30 residential units) [41]. Therefore, WWHR can have significant potential in residential buildings with a large number of residents, such as apartment buildings, multifamily complexes, hotels or student accommodation. Alnahhal and Spremberg [40] presented a WWHR feasibility study for a 330-room student hostel in Berlin. Based on the wastewater flow, wastewater temperature and supply temperature measurements, the study showed a 30% reduction in hot water heating demand for the considered heat exchanger and heat pump specifications.
Non-Residential Facilities
Non-residential buildings like sports complexes [34], public showers [42] and commercial washing facilities [33] also produce large amounts of wastewater. Such facilities have a stable heating demand throughout the year and specific operating times; therefore, heat use and wastewater discharge are concentrated during those times. Ip and She [34] conducted an experimental study at a two-storey sports pavilion at the University of Brighton, UK. In this study, each shower was equipped with a single heat recovery pipe: a vertical configuration on the first floor and a horizontal configuration on the ground floor. The study demonstrated that the incoming cold-water supply could be preheated by up to 10 °C with WWHR, and a potential weekly saving of £40-119 was shown.
Liu et al. [42] presented a numerical study of a WWHR system with a heat pump in a public shower facility. The analysis was applied to a real building case with 50 shower nozzles. The study concluded that the total cost (initial and operating) of the WWHR system was 12% and 39% of the cost of oil-boiler and coal-boiler alternatives, respectively. The economic savings shown in this study were significantly higher than those found in a single residential building. Possible explanations include the facility's high wastewater volume, the assumption of an unrealistic shower time of 35 min per person, and the disregard of maintenance costs in the calculations.
Other types of non-residential buildings, such as offices, hotels [43], hospitals [44] and commercial kitchens/restaurants [45], also maintain good potential for WWHR. Again, the amount of energy recovered depends upon the wastewater characteristics of the building.
Baek et al. [36] performed a feasibility study to analyze the possibility of using wastewater as a heat source for a heat pump in a hotel sauna service in South Korea. The study predicted a yearly mean operating COP of the heat pump of 4.5-5.0 and reported that the heat pump could meet 90% of the hot water demand. Spriet and McNabola [45] considered the installation of heat exchangers for WWHR in commercial kitchens in different hospitality sectors in the UK. This analysis reported a total financially feasible potential of approximately 1.24 TWh/yr from WWHR in the hospitality and food service sector in the UK, with the largest potential in health care outlets due to their large water consumption.
Remarks
There are several important issues which require more attention from the research community at the building level. These issues are crucial for the practical implementation of WWHR systems in actual buildings. They fall into the categories of: (i) resource potential, (ii) practical implementation, and (iii) WWHR operation. Considering the potential size of the waste heat resource available at building level, there is a significant body of evidence for domestic buildings and a limited amount for certain commercial activities such as restaurants/cooking, launderettes, and so forth. However, there is limited evidence on the waste heat potential at buildings containing industrial activities (e.g., brewing, food processing, power production, cooling, etc.). More information is needed on heat availability at commercial buildings (hairdressers, hotels, restaurants, leisure centres, spas, etc.). Information on the amount of wastewater heat and the variation in this across seasons, climates, and building/business size and type is a crucial gap in identifying the heat inputs in the sewer system as a whole.
Practical implementation issues are often neglected in WWHR investigations of this nature. For example, the issue of the distance between the waste heat source and the existing heating facility or incoming cold water in a building, which often do not coincide, can be barriers to implementation. Retrofitting existing buildings with WWHR systems, in general, is a challenging issue that may often prevent the resource from practical exploitation. In some cases, the required space may not be available for such installations; in others, the water piping infrastructure has to be considerably altered, leading to additional costs.
Studies also often neglect the issue of the maintenance of WWHR systems, which are evidently prone to fouling due to wastewater characteristics. Maintenance incurs a cost and also implies reduced performance of heat exchangers and heat pumps, with lower effectiveness and COP over time. Another issue, mentioned by Spriet and McNabola [23], is that studies often present only high-level analyses of the potential of WWHR, with inherent assumptions on the spatial and temporal variability of heat availability and demand and on system performance. Assuming instantaneous consumption of waste heat and not considering the temporal mismatch between heat recovery and heat consumption overestimates the impact of WWHR. Assuming constant system performance regardless of waste heat temperature and flow, and neglecting the impact of fouling, similarly overestimates the impact of WWHR. Finally, there is also a lack of real-world or experimental demonstrations of WWHR in the literature at the building level. Most studies are theoretical in nature, and more research is required to investigate real-world applications.
Heat Recovery at Sewer Pipe Network Level
WWHR from raw wastewater in public sewer pipe systems is a promising source of energy. Wastewater flow in sewer pipe systems is abundant and continuous throughout the year, with a yearly temperature of 10-20 °C, making sewer pipe wastewater an ideal source of heating/cooling for heat pumps throughout the year.
Sewer Pipe Heat Recovery
There are two possible ways of WWHR from the sewer pipe system. The first is to install a heat exchanger in the pipe bed, as seen in Figure 5a, and use a heat pump to deliver this energy to a centralized heating system. The second option is to install an external heat exchanger above ground level: for this, a portion of the sewer water flows through a screen to retain coarse solids, and the pre-screened wastewater is then pumped to the heat exchanger above ground (Figure 5b). The heat exchanger is further connected to a heat pump evaporator. Both types of installation have been in operation in various European countries like Switzerland and Norway for many years [5], with thermal power ratings of installations ranging from 10 kW to 20 MW. In the first option, the heat exchanger can either be separately placed in the sewer pipe bed, or a sewer pipe can be explicitly manufactured with an integrated heat exchanger in the pipe wall. The advantage of this option is that no additional space is required for the installation, and wastewater does not have to be pumped out of the sewer. However, the system's maintenance is not straightforward and may require permits, since the operation of the sewer line is temporarily suspended during maintenance. Further, some design preconditions need to be met: for example, a minimum sewer diameter of 800 mm, a minimum wastewater flow of 30 L/s and a minimum wetted surface of 0.8 m² per metre length of sewer line [5]. Another major issue with such heat exchangers is fouling due to biofilm formation and sediments in the wastewater, leading to fouling resistance, which can reduce the heat transfer efficiency of the heat exchanger by up to 50% [46]. Therefore, frequent maintenance and cleaning are required to remove any biofilm build-up.
For the second option, the main advantage is the independence of the system from the main sewer line, and therefore minimal interference with sewer operation. The installation and maintenance of the heat exchanger are easier, and the cross-section and slope of the sewer are irrelevant to the design of the system. However, the system requires pumping power to transport the pre-screened wastewater to the heat exchanger, which increases the overall energy demand, and additional space is required for the installation. The heat exchanger is less prone to fouling because of the pre-screening of the wastewater; in some cases, separate devices are installed to exclude and remove suspended solids and fecal matter [47,48].
Energy Savings and Economical Potentials
Preliminary feasibility studies have shown the significant energy saving potential of WWHR in sewer pipe networks. For example, Cipolla and Maglionico [49] took flow and temperature measurements in the sewer system of Bologna, Italy, and demonstrated a thermal power potential of 74 kW with 3 L/s of wastewater flow and a 5.9 °C temperature drop. Sievers et al. [50] showed that WWHR could provide 554 MWh/year in Hamburg, Germany, with the wastewater temperature dropping to 10 °C.
Studies based upon existing WWHR systems in different parts of the world also present strong evidence of significant energy saving potential, with up to 50% reduction in primary energy consumption [47,48]. Guo and Hendel [51] presented a field performance study of a WWHR system for a low-carbon district in a suburb of Paris. The study observed that the heat pump system supplied around 75% of the heating energy throughout the year, and the share of recovered energy represented 30-40% of monthly primary energy consumption. The installed system emitted four times less CO2 than a gas-only heating supply.
Apart from energy savings, the economic benefits of a WWHR system are an important factor in the technology's overall viability. WWHR technology for sewer systems has high investment costs, and operating and maintenance costs are also significant. The economic success of such systems depends upon the energy savings brought by the installation, which in turn depend upon the following key points [5,52] (a simple payback sketch follows the list):
• The payback period of WWHR will be shorter when replacing an expensive fuel source than an inexpensive one.
• WWHR systems are more financially successful when heat demand is available throughout the year; using the WWHR system for longer periods yields more energy savings and decreases the payback period.
• Electricity is used in WWHR systems for heat pump compressor operation and auxiliary components such as pumps, and accounts for the heat pump's operational cost. Low electricity prices therefore improve the economic feasibility of the system.
• WWHR systems are more financially successful at sites where a new heating/cooling system is being constructed or an existing installation is due to be replaced, decreasing the overall investment capital and avoiding retrofitting costs.
• The distance between consumer and heat recovery location also influences the economics of the WWHR system: the greater the distance, the higher the heat losses during transport, leading to greater operational costs.
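The interplay of these factors can be made concrete with a minimal payback sketch; the function and all prices in the example are hypothetical, and the calculation ignores discounting, fouling degradation and the heat-distribution losses discussed above.

```python
def simple_payback_years(capex, annual_heat_mwh, fuel_price_per_mwh,
                         elec_price_per_mwh, cop, annual_om):
    """Simple (undiscounted) payback of a WWHR heat pump displacing a
    fuel-based heater. Illustrative sketch only."""
    fuel_cost_avoided = annual_heat_mwh * fuel_price_per_mwh
    electricity_cost = (annual_heat_mwh / cop) * elec_price_per_mwh
    net_annual_saving = fuel_cost_avoided - electricity_cost - annual_om
    if net_annual_saving <= 0:
        return float("inf")  # never pays back at these prices
    return capex / net_annual_saving

# Hypothetical example: displacing an expensive fuel vs. a cheap one
print(simple_payback_years(250_000, 900, 110, 180, 3.5, 4_000))  # ~5 years
print(simple_payback_years(250_000, 900, 45, 180, 3.5, 4_000))   # inf (cheap fuel)
```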
Some research studies from the literature are discussed in the context of the above points. Hrabová et al. [53] performed an economic assessment of a WWHR system, comparing its overall cost with that of a gas boiler and an electric boiler. WWHR achieved higher economic savings when it replaced the electric boiler, owing to the boiler's high operational cost.
Pamminger et al. [54] performed a desk study to analyze sewer heat recovery potential in Melbourne, Australia. The study observed that current natural gas prices were too low and would have to increase by around 162% for the system to be commercially feasible.
The feasibility study presented in [55] assessed the economic viability of WWHR for small-scale neighbourhoods. Preliminary cost analysis in the study showed that the total cost of the WWHR system was 60% higher than that of a conventional gas boiler system at the natural gas and electricity prices considered (the Netherlands).
While WWHR in sewer pipe systems is beneficial in terms of reducing energy consumption and GHG emissions, it could negatively influence the treatment of wastewater at the WWTP. This is because the wastewater temperature drops when wastewater heat is utilized; consequently, the temperature of influent wastewater at a WWTP may also be reduced. This could lower the nitrification capacity of the WWTP, nitrification being a temperature-dependent process. Different countries have defined guidelines to ensure the efficient operation of WWTPs; in Switzerland, for example, the minimum temperature of influent wastewater must be 10 °C [56].
Heat Recovery at WWTP Level
Another critical location for energy recovery from wastewater is at WWTPs. There are three possible heat recovery points at a WWTP: from raw wastewater before treatment, from partially treated water within the WWTP, and from effluent discharge after treatment. Heat recovery from raw wastewater is similar to recovering heat from within the sewer pipe system. At the WWTP inlet, the influent's temperature and the available energy are highest, offering the greatest potential for WWHR; however, the water quality is lowest, posing significant technical challenges. Moving through the WWTP, the temperature decreases and the water quality improves, so that at the effluent discharge the technical difficulty of recovering heat from raw wastewater is removed at the cost of a lower-temperature fluid.
WWTPs process and treat large amounts of wastewater from sewers and then discharge it into nearby water bodies on a daily basis. The temperature of this treated water is stable, with low daily and weekly variation compared to the influent temperature [5,57]. Even in wet weather, the effluent temperature does not show the erratic variations of the influent temperature [5].
The potential of WWHR from treated wastewater is higher than that from sewer pipe wastewater, since the downstream water from a WWTP can be cooled to much lower values [5] and the effluent flow is higher. The relatively low variation in water temperature improves the performance of heat pump systems. Since the water is already treated, another advantage is lower bio-fouling and less solid matter interfering with the heat exchanger, thus improving heat transfer efficiency. However, since heat consumers are usually not located near the WWTP, a major disadvantage of this option is that the heat must be transported over long distances, leading to high heat losses.
Energy consumption in WWTPs is large; in the US, for example, WWTPs account for 0.8% of national primary energy consumption [58]. Most of this consumption is in the form of electricity. On the other hand, WWTPs have a significant amount of energy at their disposal, coming from raw and effluent water and from other processes such as electricity generation from the incineration of biogas produced by anaerobic digestion, and of bio-solids [59]. Therefore, WWTPs can be considered regional energy cells that can deliver energy (heat and electricity) into local supply networks [60]. With heat supply, however, the feasibility of such energy cells depends on consumer distance and the treatment capacity of the WWTP. A major constraint for recovered heat usage is that a local district heating network must be present into which the recovered heat can be injected; where it is not, on-site consumption may be preferred, for example to heat buildings in the WWTP precinct.
Regarding the energy potential, the heat available in effluent at WWTPs is substantial due to high water volumes. Beyond the water volume, the energy potential depends upon the temperature drop of the effluent after energy recovery. Based upon different studies in the literature, the temperature drop can be up to 8 °C; however, it depends upon the lower temperature limit set by the relevant environmental protection bodies to protect the ecology of receiving waters. Đurđević et al. [61] presented a theoretical case study analyzing the utilization of WWHR in the city of Rijeka, Croatia. The location considered had a WWTP in operation with a capacity of 540,000 PE and 3000 L/s of effluent water at maximum load. Based upon the considered water flow and a temperature drop of 6.5 °C, a heat recovery potential of 75 MW was shown, which was 72% of the existing natural gas heat plant capacity. The study also analyzed the COP of the proposed heat pump, which decreased from 4.7 to 2.87 as the condensation temperature increased from 60 °C to 90 °C.
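The order of magnitude of such estimates can be checked against the sensible-heat relation, using standard water properties (the exact assumptions of [61] may differ):

$$\dot{Q} = \rho\,c_p\,\dot{V}\,\Delta T \approx 1000 \times 4186 \times 3.0 \times 6.5 \approx 8.2\times10^{7}\,\mathrm{W} \approx 82\,\mathrm{MW},$$

which is of the same order as the reported 75 MW; the difference plausibly reflects the load factors or property values assumed in the study.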
As previously mentioned, the delivery of recovered heat from a WWTP to consumers can be impractical because of high heat losses, or a district heat distribution network may simply not be present. In such cases, on-site usage of recovered heat can be more sensible, although only a fraction of the heat available in the wastewater is then used. For example, Chae and Kang [62] performed a study to estimate the energy independence of WWTPs with three different energy resources: photovoltaic panels, small hydropower and WWHR. Under the design conditions considered for the photovoltaic panels, hydropower turbine and heat pump, 6.5% of total energy consumption was estimated to be covered by the proposed resources, with heat recovery from wastewater having the highest share among them, at around 3.65%. The payback period of the heat recovery system considered in the study was estimated to be 6.8 years. Even though the recovered heat supplied all of the heating demand of the buildings in the WWTP precinct, that demand amounted to only 2.2% of the heat available in the effluent water (at a temperature drop of 3 °C); much of the available heat therefore remained unexploited.
Apart from supplying the space-heating and hot water demand of WWTPs, recovered heat can also be used for low-temperature treatment processes. Pochwala and Kotas [63] presented such a case study, in which heat was recovered from raw wastewater and used both to heat an on-site building and to raise the temperature of a sequencing batch reactor (SBR) to the optimum value for the treatment process. The recovered heat supplied 98% of the heat demand of the WWTP.
It is evident that heat recovery from effluent water at WWTPs is an abundant source of energy. However, to realize the full potential of this form of energy recovery, a local district heating/cooling network is necessary. Most existing heat recovery installations at WWTP effluents around the world are large-capacity heat pumps supplying district heating and cooling networks [5].
Summary of Analyzed Studies
The studies analyzed in the present paper focusing upon WWHR at different levels are summarized in Table 1. The table categorizes the analyzed studies based upon energy recovery level, technologies used, the approach of the study and the analyzed aspects of WWHR. It can be seen in Table 1 that only a few studies have considered the environmental impacts (emission analysis) of WWHR, and many do not account for the economic cost of the proposed WWHR systems. The majority are numerical studies analyzing the potential energy savings based on wastewater flow and temperature measurements. Evidently, at the component level, only heat exchangers are used as the heat recovery technology, due to the economic infeasibility of heat pumps in this case. Further, Table 2 summarizes the different WWHR technologies at the various levels of heat recovery and presents their main characteristics.
Wastewater Temperature and Flow Characteristics
Wastewater flow and temperature characteristics are crucial for determining WWHR potential: higher flows and temperatures allow greater heat recovery. The temperature of wastewater is highest at the component level, since heat loss to the environment is lowest there. As wastewater flows from the component to the building drain and the sewer, heat is lost to the environment and the temperature decreases. The amount of wastewater produced is relatively low and highly fluctuating at the component and building levels, while at the sewer pipe and WWTP levels flow rates are higher and more stable throughout the year.
Building Level
Wastewater flow and temperature characteristics differ between building types, such as buildings containing public showers (e.g., leisure centres), residential buildings, restaurants/kitchens and so forth. For residential settings, measurements and estimates of wastewater flow and temperature are available from several studies. Meinzinger and Oldenburg [79] reviewed more than 130 studies spanning over 20 countries and reported that wastewater volume in a household can range from 69 to 150 L per day (d) per capita (c), with a median value of 110 L/c.d. Domestic wastewater temperature can vary from 16 to 38 °C, as reported by Heinz et al. [80]. Alnahal and Spremberg [72] measured the wastewater temperature of a student hostel in Berlin over a one-month period; the daily average varied from 11 °C to 20 °C, with an average of 15 °C.
Wastewater temperature and flow rate are closely related to end-use water temperature and flow rate; if information about end-use is available, wastewater flow and temperature can be estimated from it. Stochastic models, such as SIMDEUM® [81] and the WaterHub framework [82], have been successfully applied to predict wastewater flow and temperature from households and buildings [82,83]. The Building America Research Benchmark Definition [84] provides a general model for end-use hot water consumption, described in Table 3, where N_br represents the number of bedrooms in the dwelling. Figure 6 depicts the hourly hot water use for each end-use as a fraction of the total end-use [24]. Based upon Table 3 and Figure 6, the hourly hot water flow rate L_hot for a residential dwelling can be calculated as [24]

$$L_{hot} = \sum_i L_{end\text{-}use,i}\,\frac{T_{end\text{-}use,i} - T_{mains}}{T_{hot} - T_{mains}},$$

where L_end-use,i and T_end-use,i are the water consumption and temperature of the ith end-use, T_hot is the hot water supply temperature, and T_mains is the cold water supply temperature, which depends upon the ambient temperature T_amb. The temperature of the drain water T_grey (°C) can be estimated as [24]

$$T_{grey} = T_{mains} + (1 - 0.01\,\eta_T)\,(T_{hot} - T_{mains})\,(1 - 0.01\,\eta_V)\,\frac{L_{hot,grey}}{L_{grey}},$$

where η_T and η_V are the loss coefficients (%) of temperature and of hot water flow rate, respectively, and L_grey and L_hot,grey are the flow rate of grey water and of the hot water that becomes grey water, respectively. Another way to model wastewater temperature is based upon measurements and correlation analysis. For example, Wong et al. [17] measured the incoming cold water supply (T_0), shower head temperature (T_2), shower drain water (T_3) and ambient temperature (T_a) over different months of the year, and established a numerical model using correlation analysis to express the cold water supply temperature and the head-to-drain temperature drop of shower water in terms of outdoor air temperature.
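A minimal sketch of this end-use model is given below. Since the equations above are only partially legible in the source and were reconstructed here, the exact forms, in particular the placement of the loss coefficients, should be checked against Ni et al. [24]; all numbers in the example are hypothetical.

```python
def hourly_hot_water_flow(end_uses, t_hot, t_mains):
    """Hot water drawn at the tap to deliver each end-use at its target
    temperature, assuming simple mixing of hot and mains water.
    end_uses: list of (litres, target_temperature_C) tuples."""
    return sum(l * (t_use - t_mains) / (t_hot - t_mains)
               for l, t_use in end_uses)

def grey_water_temperature(t_mains, t_hot, l_hot_grey, l_grey,
                           eta_t=10.0, eta_v=5.0):
    """Drain (grey) water temperature with percentage losses eta_t
    (temperature) and eta_v (flow); reconstructed form, check against [24]."""
    return t_mains + ((1 - 0.01 * eta_t) * (t_hot - t_mains)
                      * (1 - 0.01 * eta_v) * l_hot_grey / l_grey)

# Example: shower (40 L at 38 C) and sink (10 L at 35 C), 60 C supply, 12 C mains
l_hot = hourly_hot_water_flow([(40, 38), (10, 35)], 60, 12)
print(f"hot water drawn: {l_hot:.1f} L")
print(f"grey water temp: {grey_water_temperature(12, 60, l_hot, 50):.1f} C")
```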
Sewer Pipe Scale
In sewer pipe systems, wastewater flow and temperature depend upon the upstream wastewater discharged into the sewers from different buildings. They also depend on ambient air and ground temperatures, the water volume already flowing in the sewer, and the sewer size, among other factors. Cipolla and Maglionico [49] measured wastewater flow in a sewer system in Bologna, Italy, at five different locations (conduits); the flow varied from 10 L/s to 1700 L/s depending upon the measurement location, with differing numbers of inhabitants in the catchment. Kretschmer et al. [85] analyzed spatial and temporal variation in the wastewater flow in a sewer system in a valley in Austria, with flow variations of around 5-60 L/s depending upon the month and measurement location.
In the case of wastewater temperature, Table 4 shows the sewer pipe wastewater temperature ranges measured at various locations in different studies; together they indicate the typical range of wastewater temperature in sewer systems.

Table 4. Measured sewer wastewater temperatures.
Study | Location | Period | Temperature (°C)
Pamminger et al. [54] | Melbourne, Australia | Jan 2012-Jun 2012 | 13.1-21.1
Wu et al. [87] | Harbin, China | not reported | 12-20
Abdel-Aal et al. [88] | Antwerp, Belgium | Feb 2012-Jan 2013 | 7-22

In sewer pipes, spatio-temporal analysis and modelling of the thermal dynamics of wastewater are essential. Such modelling can be applied to assess potential sites for WWHR installation. Another application is to understand the effect of heat extraction on the downstream wastewater temperature: wastewater is sometimes required to have a specific minimum temperature before it enters the WWTP (10 °C in Switzerland), since a lower temperature can hinder nitrification in the WWTP, and countermeasures need to be planned at the design stage. Lower temperatures in sewer pipes may also influence the formation of fatbergs, so modelling the impact of heat extraction on this aspect of sewer operation is also important.
The following models from the literature describe the thermal dynamics of wastewater in sewer systems.
Alligation Alternate
Alligation alternate is a relatively simple method for modelling wastewater temperature dynamics [89,90]. In this method, the temperature of two mixing wastewater flows (Figure 7), with flow rates and temperatures Q_A, T_A and Q_B, T_B, is estimated by the standard flow-weighted mixing rule

$$T_{mix} = \frac{Q_A T_A + Q_B T_B}{Q_A + Q_B}.$$

In the context of modelling wastewater thermal dynamics, the two flows need not be separate flows mixing at one point but can represent two points within the sewer pipe system; for example, one point may be the point of heat extraction and the other the inlet of a WWTP. The method requires measurements of only wastewater temperature and discharge. However, it does not consider the heat exchange with in-sewer air, the surrounding soil and the sewer pipe walls, which is the main reason for its low accuracy. Kretschmer et al. [85] used this approach to analyze wastewater temperature evolution and WWHR potential in a sewer system in Austria, considering the impact on the inflow temperature of WWTPs.
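A one-line implementation of this mixing rule, with hypothetical flows chosen to echo the 10 °C influent limit mentioned above, might look as follows:

```python
def alligation_mix(q_a, t_a, q_b, t_b):
    """Flow-weighted mixing temperature of two wastewater streams
    (alligation alternate; see the equation above)."""
    return (q_a * t_a + q_b * t_b) / (q_a + q_b)

# Example: 40 L/s at 9 C (cooled by heat recovery) joins 25 L/s at 14 C
print(f"mixed temperature: {alligation_mix(40, 9.0, 25, 14.0):.1f} C")  # ~10.9 C
```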
TEMPEST
TEMPEST is a model developed by Dürrenmatt and Wanner [91] that estimates the dynamics and longitudinal spatial profiles of wastewater temperature in sewer systems. It is based upon heat and mass balances in sewer systems [92]. TEMPEST models a sewer system as two basic elements: conduits and nodes. Conduits are modelled by 1-D balance equations; nodes are introduced to represent discontinuities in the sewer line due to lateral inflows or changes in pipe geometry, material properties or surrounding soil properties, and are modelled using continuity equations.
The case study by Dürrenmatt and Wanner [93] demonstrated using TEMPEST that the heat transfer between wastewater and the surrounding soil was the most significant heat transfer process.
The study by Ali and Gillich [94] used TEMPEST to estimate heat recovery potential at a sewer site in London with lateral inflow mixing. Elías-Maxil et al. [95] presented a simplified, parsimonious model based upon TEMPEST, considering only the heat transfer from the water to its surroundings. A case study based upon this model was presented by Hoffman et al. [96]; it concluded that under unsteady conditions the model was more than twice as accurate as TEMPEST, owing to its consideration of the hydraulic influence of maintenance holes, other empty spaces and the pumping regime in the modelling of wastewater flow.
Sitzenfrei et al. [97] used TEMPEST to analyze the interaction of decentralized (building level) and centralized (sewer pipe) WWHR systems. The study concluded that the performance of centralized heat recovery systems decreases by up to 40% when all dwellings are equipped with decentralized WWHR systems.
Abdel Aal et al. Model
Abdel Aal et al. [98] proposed a simplified model of wastewater temperature dynamics after finding that many input parameters in TEMPEST have an insignificant effect on wastewater temperature evolution. The model comprises an energy balance along the pipe and estimates of the heat transfer coefficients, and assumes that the temperature variation of the wastewater is caused by heat losses to the in-sewer air and the surrounding soil. Using this model, Abdel Aal et al. [98] showed that the in-sewer air temperature has the greatest influence on wastewater temperature dynamics, followed by the surrounding soil temperature, contrary to the TEMPEST result. The sewer pipe is modelled as discrete cross-sections of length ΔL.
The temperature evolution along the sewer pipe is given by Equation (6), with the parameters involved in the model described in Table 5. Equation (6) is applied sequentially to find the wastewater temperature at the nodes (T_{j+1}, T_{j+2}, ..., T_{j+n}) along the sewer line, starting from the upstream temperature T_j.
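To convey the sequential node-marching idea, a sketch is given below; it uses a generic lumped heat-loss form with hypothetical coefficients, not the calibrated Equation (6) of Abdel Aal et al. [98].

```python
def march_wastewater_temperature(t_up, n_nodes, dl, perimeter,
                                 h_air, t_air, h_soil, t_soil,
                                 m_dot, cp=4186.0):
    """Sequentially estimate wastewater temperature at nodes along a sewer,
    assuming each element of length dl loses heat to in-sewer air and soil
    through lumped coefficients (illustrative, not the model of [98])."""
    temps = [t_up]
    for _ in range(n_nodes):
        t = temps[-1]
        q_loss = (h_air * (t - t_air) + h_soil * (t - t_soil)) * perimeter * dl
        temps.append(t - q_loss / (m_dot * cp))  # energy balance per element
    return temps

# Example: 20 elements of 50 m, 30 L/s flow (~30 kg/s), hypothetical coefficients
profile = march_wastewater_temperature(16.0, 20, 50.0, 1.2,
                                       h_air=3.0, t_air=12.0,
                                       h_soil=1.5, t_soil=10.0, m_dot=30.0)
print(f"downstream temperature: {profile[-1]:.2f} C")
```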
Saagi et al. [99] used this model to analyze the sewer systems of two Swedish cities (Linköping, Malmö), with maximum prediction errors ranging from 0.7 to 0.9 °C.
Extending the previous model, Abdel Aal et al. [100] published a recent study analyzing the impact of the in-sewer air velocity profile, close to the wastewater surface, on the heat transfer processes, and proposed an improved method for estimating a new heat transfer coefficient by employing a dimensionless calibration factor.
Other Measurements Based Approaches
In contrast to intricate heat transfer models, measurement-based approaches rely upon historical measurements and mathematical models to capture the relationship between the relevant input and output parameters. Various measurements, such as wastewater temperature, soil temperature, ambient air temperature and wastewater discharge, are taken over a certain period of time, and relationships are established between these variables through mathematical tools such as correlation analysis. An example of such an approach is the study by Escalas-Cañellas et al. [101], who used a time series modelling method in which future wastewater temperature is predicted from historical temperature, mean daily ambient temperature and rainfall. A modelling error of 0.5 °C (RMSE) between predicted and measured temperature was observed.
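The flavour of such measurement-driven models can be illustrated with an ordinary least-squares autoregressive fit; this is a generic sketch with synthetic data, not the specific model of [101].

```python
import numpy as np

def fit_arx(t_ww, t_amb, rain):
    """Fit T_ww[k] ~ a*T_ww[k-1] + b*T_amb[k] + c*rain[k] + d by least
    squares, mimicking the inputs used in [101] (generic illustration)."""
    y = t_ww[1:]
    X = np.column_stack([t_ww[:-1], t_amb[1:], rain[1:], np.ones(len(y))])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # a, b, c, d

# Example with synthetic daily data
rng = np.random.default_rng(0)
t_amb = 12 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365))
rain = rng.exponential(2.0, 365)
t_ww = 14 + 0.4 * (t_amb - 12) - 0.1 * rain + rng.normal(0, 0.3, 365)
print(fit_arx(t_ww, t_amb, rain))
```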
Abdel Aal et al. [102] used another approach, in which sewer wastewater temperature was modelled using an Abductory Inductive Mechanism (AIM), a supervised learning technique, with two inputs: upstream wastewater temperature and downstream in-sewer air temperature. A comparison with the model of Abdel Aal et al. [98] showed that the proposed AIM estimates the wastewater temperature with higher accuracy.
In a recent study, Golzar et al. [103] used an artificial neural network to forecast the influent wastewater temperature of a WWTP. The model considered ambient temperature, building effluent temperature and flow rate, stormwater flow rate, infiltration flow rate, the hour of the day and the day of the year as input parameters.
In conclusion, while TEMPEST uses a comprehensive modelling approach accounting for various heat transfer processes, other more simplified models can capture the temperature dynamics with acceptable accuracy. More experimental studies could help to further validate these approaches.
Life Cycle Environmental Assessment
It is evident that WWHR reduces GHG emissions by lowering primary energy usage. However, to analyze the overall sustainability of WWHR, it is vital to consider the full life-cycle environmental assessment (LCA) of the technology. Typically, researchers emphasize the energy savings of WWHR, and some have also examined the GHG emission savings. However, the LCA of WWHR technologies has so far not gained considerable attention from the research community, and the associated literature is limited.
Ip et al. [104] presented a case study focusing upon the LCA of shower water heat recovery in a sports facility. The study's goal was to perform an LCA of the wastewater heat exchangers (WWHXs) installed in the facility, compared against PVC-u pipe with no heat recovery. The results showed that the lifetime GHG emissions (kg CO2-e) of the WWHX were five times those of the PVC-u pipe. However, the reduction in GHG emissions during the operational stage of the WWHX indicated an emission payback period of 0.55-10.02 years, depending upon usage.
A study by Schestak et al. [105] analyzed the sustainability of WWHR in a commercial kitchen in North Wales, UK. The study employed LCA to determine the impact of heat recovery with a concentric double-walled pipe heat exchanger and the associated pipes and fittings, and further explored the possibility of using recycled copper and polypropylene-graphite instead of virgin copper. The results showed that the GHG emissions of the heat exchanger ranged from 16 to 87 kg CO2-e; the heat exchanger combining recycled copper (35%) and polymer material was concluded to be the most environmentally friendly option currently available on the market.
Impact on Water Treatment Process
Temperature plays an important role in wastewater treatment processes: the rates of the biological and chemical reactions in some treatment stages depend strongly upon temperature [106]. Many existing studies in the literature focus solely on modelling the temperature in activated sludge basins [107-109], reflecting its importance. Recovering heat from wastewater in sewer systems may reduce the influent temperature at WWTPs. This is particularly important if heat is recovered close to the WWTP, since for recovery far upstream the sewage may regain heat from the surrounding air and soil. The decrement ΔT in wastewater temperature can be calculated as

$$\Delta T = \frac{\dot{Q}}{\dot{V}\,\rho\,c_p},$$

where $\dot{Q}$ is the amount of recovered heat per unit time and $\dot{V}$, ρ and c_p are the volumetric flow rate, density and specific heat capacity of the wastewater. The reduction in influent temperature can negatively impact the nitrification/denitrification capacity of the WWTP. Wanner et al. [110] specifically investigated the effect of heat recovery on nitrification and nitrogen removal for a WWTP in Zurich, Switzerland, measuring the influent and effluent temperatures during both dry and wet conditions. They argued that a temporary reduction (over a couple of hours) in wastewater temperature does not affect the nitrification capacity, owing to the long hydraulic retention time in the activated sludge tanks and secondary clarifiers, but concluded that a long-lasting reduction of 1 °C in wastewater temperature causes a 10% reduction in nitrifying bacteria, which would have to be compensated by a 10% increase in aerobic sludge retention.
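A small sketch tying this relation to the regulatory limits discussed earlier (e.g., the 10 °C Swiss influent minimum [56]) might read as follows; the flow, load and temperature values are hypothetical.

```python
def influent_temperature_after_recovery(t_in, q_recovered_kw, flow_l_s,
                                        rho=1000.0, cp=4186.0):
    """Wastewater temperature after extracting q_recovered_kw upstream,
    using dT = Q / (V_dot * rho * cp); neglects heat regained from soil
    and in-sewer air on the way to the WWTP."""
    v_dot = flow_l_s / 1000.0  # m3/s
    dt = (q_recovered_kw * 1000.0) / (v_dot * rho * cp)
    return t_in - dt

t_wwtp = influent_temperature_after_recovery(t_in=12.0,
                                             q_recovered_kw=600.0,
                                             flow_l_s=60.0)
print(f"influent ~{t_wwtp:.1f} C", "(below 10 C limit!)" if t_wwtp < 10 else "")
```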
Lotti et al. [111] reported the effect of temperature on the activity of anaerobic ammonium oxidation (anammox) bacteria. The study concluded that anammox activity cannot be effectively described by the Arrhenius equation in the low-temperature range (<15 °C).
Brehar et al. [112] presented a case study of a municipal WWTP in Romania investigating the effect of influent temperature on the treatment process. The study observed increased microbial activity, enhanced nitrification and denitrification, and decreased nitrate, nitrite and ammonia concentrations at higher influent temperatures. Overall, it concluded that, in terms of nitrogen removal, a decrease in influent temperature can negatively impact the performance of a WWTP.
Abdel Aal et al. [113] presented a laboratory-scale study investigating the impact of sewer heat recovery on in-sewer processes such as the deposition of fats, oils and greases (FOGs) and hydrogen sulphide (H2S) emissions. The study found no clear temperature dependence of the FOG deposition rate in the laboratory set-up used. Regarding H2S, a 40% reduction in concentration was observed for wastewater at 5 °C compared to 20 °C. In conclusion, the reduced wastewater temperatures due to WWHR can significantly reduce H2S formation, while the effect on FOG deposition demands further research.
In general, it is clear from the literature that temperature change can result in impacts on the wastewater treatment process. However, it is unclear whether WWHR at the component or building levels would significantly impact influent temperatures. Further research is required in this regard.
Impact on the Receiving Water Ecology
The variation in the wastewater discharge temperature from WWTPs can have considerable consequences for the ecology of receiving water bodies. A reduction in treated wastewater temperature is positive for the biological community of the receiving water [114]. In the case of cooling applications, the heat rejected into the wastewater, and the resulting increase in water body temperature, can intensify biological processes, leading to accelerated oxygen depletion that can negatively impact the water ecology [114].
Legal Frameworks
Some governments have begun to develop legal guidelines to regulate thermal energy recovery from wastewater. Notable examples include guidelines from the Canton of Zurich, Switzerland, and from the German Association of Water, Wastewater and Waste [41,90]. The Swiss guidelines clarify that ownership of the thermal resource lies with the sewer and WWTP operators; hence, any exploitation of the resource must be approved by these infrastructure operators. Further, if heat recovery occurs at the outlet of a WWTP, additional approval is required from the appropriate regulatory authorities, ensuring that the water bodies receiving the wastewater discharge are of adequate size and are not subjected to excessive thermal loading due to heat recovery. For in-sewer heat recovery, both guidelines stress the importance of maintaining a minimum wastewater temperature to ensure nitrogen removal at WWTPs. The German guidelines also require that the hydraulic function of sewers is sustained, so that no excess accumulation of solids occurs when heat exchangers are installed. On the other hand, many guidelines also set criteria for the maximum temperature of wastewater disposal [115]. Where wastewater is released at a high temperature, as is the case for many utility operations and industries, WWHR can bring a double benefit by recovering energy as well as protecting the receiving water conditions.
In terms of the ownership of larger heat recovery schemes, the development of an energy utility by the City of Vancouver to distribute heat recovered from wastewater provides a model for how this can be achieved [116]. Thus far, the work undertaken in regulating the recovery and distribution of heat recovered from wastewater provides a solid foundation for its broader implementation.
Concluding Remarks and Future Directions
Wastewater contains a considerable amount of thermal energy, which can be recovered at different points in the water cycle and utilized to reduce heating demand. Thermal energy is extracted using either heat exchangers or a combination of heat exchangers and heat pumps. Some key points that can be concluded are:
• Heat recovery from shower water using a heat exchanger can be an efficient and economically viable option. Vertical heat exchangers have large space requirements, and retrofitting a WWHR system can increase the investment costs.
• At the domestic building level, due to the low quantities of wastewater flow and high economic costs, heat pumps are not a viable option for heat recovery at the current prices of alternative heating sources. It is feasible to use a wastewater source heat pump at properties with higher volumes of wastewater discharge, such as public showers, gymnasiums, sports centres, commercial kitchens, apartment complexes, and so forth.
• Further research should be carried out with a focus on policy and decision making to improve the economic competitiveness of WWHR systems at the domestic building level.
• At the sewer pipe level, wastewater flow is abundant. The temperature varies from 10 to 25 °C throughout the year with low daily variation, which makes sewer water an ideal low-grade heat source for heat pumps. The main disadvantage in this case is the fouling of heat exchangers, which can reduce the efficiency of the heat recovery system, thus requiring regular maintenance.
• Economic savings and the payback period for WWHR at the sewer pipe level depend upon many factors, including current electricity prices, longer-period usage of the WWHR system and the cost of traditional fuel sources.
• Research studies considering the energy, economic and environmental aspects of WWHR collectively are still limited in the literature.
• The surrounding soil of a sewer pipe and the in-sewer air are the two major pathways of heat loss for sewer wastewater.
• Downstream of WWTPs, the wastewater temperature is relatively stable and can be cooled down to much lower levels. However, there can be higher heat transmission losses in this case due to the often distant location of WWTPs from consumers.
• Along with positive impacts such as the reduction of primary energy usage and GHG emissions, WWHR from sewer wastewater could negatively impact the nitrification capacity of WWTPs, leading to higher ammonium concentrations in WWTP effluent. However, the upstream impact of WWHR on downstream treatment requires further investigation to fully quantify this potential effect.
In addition to the above key findings of the present work, some suggestions and recommendations for future work can be summarized as follows:
• In order to encourage heat recovery at the component and building levels, countries can introduce WWHR into their respective building codes and guidelines aimed at improving the energy efficiency of existing and new buildings.
• At present, conventional technologies prevail over such sustainable alternatives due to the low prices of fuel sources. More studies with direct attention to the economic analysis of small and large scale WWHR should be performed to clearly highlight the advantage of WWHR over conventional technologies as the cost of traditional fuel sources rises.
• Non-residential buildings that generate a large amount of wastewater, such as launderettes, hotels, restaurants and the food processing industry, hold significant potential for WWHR. More research should be dedicated in this direction.
• The impact of separating wastewater at the source in residential buildings on WWHR could be considered in a future study expanding on previous work by Ni et al. [24]. As discussed in Section 5.1, heat in residential wastewater is predominantly embedded in the greywater component from bathrooms, washing machines and kitchens. Separating this wastewater at the source in a building could improve the feasibility of WWHR by concentrating the available heat, alongside the benefits of simpler treatment of the relatively clean greywater for potential reuse.
• The decentralization of wastewater treatment can be a method to enhance local water recycling, reduce reliance on extensive sewer networks, and reduce the intensity of environmental impacts from large, centralized WWTPs. The interaction of wastewater decentralization and WWHR could be an interesting concept to explore, with lower wastewater flows available in small decentralized WWTPs but at potentially higher temperatures due to reduced distances in sewers.
• Another interesting idea would be to explore the integration of WWHR into district heating as a decentralized heating source.
Overall, wastewater is an important source of clean thermal energy with significant potential to improve the efficiency of the energy infrastructure and reduce GHG emissions. It should be given more attention by the research community, policymakers and other stakeholders committed to achieving climate neutrality.
"Environmental Science",
"Engineering"
] |
Defect scattering can lead to enhanced phonon transport at nanoscale
Defect scattering is well known to suppress thermal transport. In this study, however, we perform both molecular dynamics and Boltzmann transport equation calculations to demonstrate that introducing defect scattering in a nanoscale heating zone can surprisingly enhance the thermal conductance of the system by up to 75%. We further reveal that a heating zone without defects yields directional nonequilibrium, with overpopulated oblique-propagating phonons that suppress thermal transport, while introducing defects redirects phonons randomly to restore directional equilibrium, thereby enhancing thermal conductance. We demonstrate that defect scattering can enable such thermal transport enhancement over a wide range of temperatures, materials and sizes, and offer an unconventional strategy for enhancing thermal transport via the manipulation of phonon directional nonequilibrium.
The authors have conducted extensive modeling to prove their arguments, and the referee feels that the disclosed underlying physics is interesting. However, the referee is not convinced that the observed phenomena are not due to numerical artifacts in the simulations. Therefore, the manuscript has to be reconsidered after the authors address the following concerns.
1. The authors claimed that "since both MD and BTE can show that introducing defect scattering can lead to enhanced thermal transport, it is clearly not caused by simulation artifacts." The referee disagrees with this argument because in the BTE calculation, the boundary conditions are set to match the transport mechanism in the MD. As such, it is not a surprise that the MD and BTE yielded similar trends.
2. The referee has the concern that the effect is actually induced by simulation artifacts, i.e., the way that heat is added to the heating zone. In fact, the authors stated "When the scattering is rare (i.e., in the ballistic regime) inside the uniform volumetric heating zone (Fig. 4d), the phonon mode propagating oblique to the z direction (mode 2) travels a much longer distance in the heating zone than the mode propagating along the z direction (mode 1), and thus receives a much larger amount of energy." The referee feels that this explanation exactly suggests that the effect is indeed due to how energy is added to the heating zone.
3. The authors set the reflection at the upper boundary as specular, which helps maintain the direction of phonons relative to the normal direction of the boundary. Based on the referee's understanding of the manuscript, if the upper boundary is set as fully diffuse reflection/scattering, the effect will be significantly diminished, which would mean that indeed the phenomenon is a simulation artifact. Similarly, in MD, if the interface between the fixed atoms and the heating zone is set to be rough, the effects should be less significant.
4. The definition of deltaT_heat_bar in the first line of Page 9 is confusing.
5. In the derivation of thermal conductance from MD, the authors adopted an equation that includes the effects of the heat source and sink. Therefore, the derived conductance includes the boundary effect. What if the authors adopt the conductance as the ratio of heat flux over the temperature gradient in the substrate?
6. Since there is no scattering in the substrate in the BTE modeling, the definition of deltaT_sub is confusing. It is a temperature jump at the lower boundary instead of a temperature drop across the substrate.
7. Under special conditions, defects can actually lead to enhanced thermal conductance, as demonstrated by quite a few experiments. The authors should discuss their results with respect to those experimental results and provide perspectives on the different transport mechanisms.
Reviewer #3 (Remarks to the Author):
In this paper, the authors show that a defect-free heating zone overpopulates oblique-propagating phonons, while introducing defects redirects phonons randomly to restore directional equilibrium. They demonstrate that defect scattering can enable such thermal transport enhancement over a wide range of temperatures, and offer an unconventional strategy for enhancing thermal transport. The results are interesting and important.
There are some issues for the authors to address:
i) A cross-sectional area of 8×8 is not large; whether there is an effect of phonon-boundary scattering on thermal conductivity needs to be further demonstrated.
ii) How do phonons with different vibrational frequencies contribute differently to the thermal conductivity? The PDOS of phonons with different doping concentrations are useful to complement the phonon transport mechanism.
iii) In Fig. 5c, why does the enhancement of G first increase and then decrease as the substrate length increases?
The authors sincerely appreciate the constructive comments concerning our manuscript from all reviewers. Based on these comments, we have made careful revisions, which are highlighted in the revised manuscript and supplementary materials using blue font. We believe the revised version has adequately addressed all of the comments. The point-by-point responses to the comments are listed below.
Response to Reviewer #1:
This manuscript demonstrates a novel phenomenon that defect scattering in the heating zone can counterintuitively enhance thermal transport at nanoscale, which breaks the common understanding that defect scattering always impedes thermal transport.
This novel finding is supported by both molecular dynamics and phonon Boltzmann transport equation calculations, which appear to be rigorously done, and the quantitative agreement between the different approaches makes the results solid and convincing.
The counterintuitive phenomenon is well explained by the phonon directional nonequilibrium mechanism. Moreover, the reported widespread existence of this phenomenon across different temperatures, materials and sizes suggests a possible route to overcoming the heat dissipation bottleneck in electronics. For these reasons, I recommend this intriguing paper for publication in Nature Communications after the following issues are addressed.
Response: We are grateful to the reviewer for the positive evaluation.
(i) The title needs to be modified to "Defect scattering can lead to enhanced phonon transport at nanoscale", which would better convey the contribution of this study.
Response: Thanks for the suggestion. We agree that the new title would better convey the contribution of this study. We have modified the title of this manuscript to "Defect scattering can lead to enhanced phonon transport at nanoscale".
(ii) The results regarding various materials and spectral nonequilibrium in supplementary materials are significant and can be moved to the main text if space is not limited.
Response: Thanks for the suggestion. We have modified and moved the results regarding various materials and spectral nonequilibrium to the updated manuscript (Pages 14-16): "We also investigate the thermal conductance of SiC and GaN systems, which serve as channel materials for power transistors [40,41], as illustrated in Fig. 6. In SiC systems, we introduce defect scattering by randomly replacing C atoms with 14C atoms.
For GaN systems, defect scattering is induced by randomly replacing Ga atoms with 71Ga atoms. Additionally, we explore the impact of 24Mg impurities in GaN and 10B impurities in SiC, as detailed in Supplementary Notes S5 and S6. Phonon properties are obtained from first-principles calculations (see Supplementary Notes S5 and S6). Our investigation encompasses varying temperatures (Fig. 6a and d), substrate lengths (Fig. 6b and e), and heating zone lengths (Fig. 6c and f). We observe thermal conductance enhancement by introducing defect scattering across different temperatures and sizes." "Our analysis so far has assumed that mode-level heat generation is proportional to the modal heat capacity (spectral equilibrium). However, we also investigate the thermal conductance considering the over-population of optical phonons (spectral nonequilibrium). We conduct a rigorous electron-phonon coupling calculation for Si to determine the mode-level heat generation (see Supplementary Note S7). As shown in Fig. 7a, compared with cases of heat generation with spectral equilibrium, we actually find more significant thermal conductance enhancement by introducing defect scattering for heat generation with spectral nonequilibrium. At 100 K, the maximum thermal conductance enhancement reaches 338%. The underlying mechanism is that defect scattering manipulates not only the directional nonequilibrium but also the spectral nonequilibrium (see Supplementary Note S7)."

(iii) The results with regard to the over-population of optical phonons are interesting, as the inclusion of defect scattering leads to a more significant increase in thermal conductance. This is based on an assumption that optical phonons obtain all the energy.
Recent electron-phonon coupling calculations show that some acoustic phonons can still obtain energy from electrons. It would be better if the authors discuss how significant this effect can be if the optical phonons obtain less energy (for example, according to rigorous electron-phonon coupling calculations).
Response: Thanks for the reviewer's insightful suggestion. We have conducted thorough electron-phonon coupling calculations for silicon (Si) to obtain a more accurate mode-level heat generation. The rigorous electron-phonon coupling calculations are implemented in the electron-phonon Wannier (EPW) package [1]. The electron-phonon coupling matrix elements are first calculated on coarse meshes and are then interpolated to 100×100×100 k-point and 60×60×60 q-point meshes to calculate the electron-phonon energy generation rate with our modified codes. The calculated mode-level heat generation is shown in Fig. R1. There are several peaks in the heat generation, which means that electrons tend to transfer energy to specific phonon modes, especially some optical phonon modes. Meanwhile, some acoustic phonon modes also receive energy from electrons.
As shown in Fig. R2, when compared to the scenario where optical phonons receive all the energy, our rigorous electron-phonon coupling analysis indicates a slightly reduced over-population of optical phonons, resulting in a minor decrease in the thermal conductance enhancement through defect scattering. However, with this spectral nonequilibrium, the inclusion of defect scattering can still lead to a significantly more pronounced increase in thermal conductance compared to scenarios where heat generation follows spectral equilibrium. We also recognize the importance of rigorously calculating electron-phonon coupling to achieve a more accurate heat generation among different phonon modes.
Consequently, we have incorporated these results into the updated manuscript (Page 16): "Our analysis so far has assumed that mode-level heat generation is proportional to the modal heat capacity (spectral equilibrium). However, we also investigate the thermal conductance considering the over-population of optical phonons (spectral nonequilibrium). We conduct a rigorous electron-phonon coupling calculation for Si to determine the mode-level heat generation (see Supplementary Note S7). As shown in Fig. 7a, compared with cases of heat generation with spectral equilibrium, we actually find more significant thermal conductance enhancement by introducing defect scattering for heat generation with spectral nonequilibrium. At 100 K, the maximum thermal conductance enhancement reaches 338%. The underlying mechanism is that defect scattering manipulates not only the directional nonequilibrium but also the spectral nonequilibrium (see Supplementary Note S7)." In the updated Supplementary Materials: "Recent studies reveal that in transistors or Raman measurements, the heat generation due to electron-phonon interactions is in spectral nonequilibrium, i.e., optical phonons (phonons with high frequency) tend to receive much more energy than acoustic phonons (phonons with low frequency) [8-11]. To study how this spectral nonequilibrium affects thermal conductance enhancement by defect scattering, we perform mode-level phonon BTE calculations with first-principles phonon properties for Si systems with a 10 nm heating zone and a 40 nm substrate. We have also conducted rigorous electron-phonon coupling calculations using the electron-phonon Wannier (EPW) package [12]. The electron-phonon coupling matrix elements are first calculated on coarse meshes and are then interpolated to 100×100×100 k-point and 60×60×60 q-point meshes to calculate the electron-phonon energy generation rate with our modified codes. The calculated mode-level heat generation is shown in Fig. S10a.
There are several peaks of the heat generation, which means that electrons tend to transfer energy to specific phonon modes, especially for some optical phonon modes.
Meanwhile, some acoustic phonon modes also have received energy from electrons.
To quantify phonon spectral nonequilibrium, the spectral phonon temperature is usually adopted [9-11,13]; phonons with different frequencies that are in equilibrium share the same spectral phonon temperature. In the ballistic regime, since optical phonons receive all the energy and have poor thermal transport efficiency [9,14], optical phonons have temperatures much higher than those of acoustic phonons (as shown in Fig. S10b). When defect scattering is induced, the spectral nonequilibrium among phonons is largely reduced (as shown in Fig. S10b). Since acoustic phonons have higher thermal transport efficiency [9,14], the temperature is lower when the spectral nonequilibrium is smaller at a fixed heat flux."

(iv) There is a mistake in Fig. 4c where the center of the red arrow does not align with the gray line, which should be corrected.
Response: Thanks for the comment. We have corrected this mistake in Fig. 4c.
(v) In wurtzite GaN and 4H-SiC systems, inducing defect scattering seems to monotonically increase the thermal conductance. If one further increases the defect scattering in wurtzite GaN and 4H-SiC systems, will the thermal conductance decrease as in the silicon system? It would be better if some discussion were provided.
Response: Thanks for the comment. Following the reviewer's suggestion, we have modeled wurtzite GaN and 4H-SiC systems with stronger defect scattering. Specifically, 24Mg atoms, a common dopant for GaN, are introduced as defects; similarly, 10B atoms are introduced in 4H-SiC. As shown in Fig. R3 and Fig. R4, the thermal conductance decreases at high defect concentrations, similar to the behavior in the silicon system. In the updated manuscript: "Additionally, we explore the impact of 24Mg impurities in GaN and 10B impurities in SiC, as detailed in Supplementary Notes S5 and S6. Phonon properties are obtained from first-principles calculations (see Supplementary Notes S5 and S6). Our investigation encompasses varying temperatures (Fig. 6a and d), substrate lengths (Fig. 6b and e), and heating zone lengths (Fig. 6c and f). We observe thermal conductance enhancement by introducing defect scattering across different temperatures and sizes." In the updated Supplementary Materials: "Wurtzite GaN is widely used in power electronics [6]. We investigate thermal conductance enhancement in GaN systems with defects. We first extract phonon properties from first-principles calculations with the quantum phonon population (Bose-Einstein distribution). A supercell of 4×4×4 and up to the fifth nearest atom neighbor are considered to obtain the third-order anharmonic interatomic force constants. The thermal conductivity of GaN is calculated based on the single-mode relaxation time approximation method. We use 40×40×40 q-points for all temperatures to sample the Brillouin zone. The bulk thermal conductivity of GaN at different temperatures is shown in Fig. S8a, and our results agree well with those in the literature [2]. To estimate the scattering from defects, we adopt the Tamura model [7]. We adopt mode-level phonon BTE calculations to investigate the thermal conductance enhancement from doping 24Mg atoms to replace Ga atoms (system in Fig. 1(a)) at different temperatures, as shown in Fig. S8b. In the main text, we introduced doping with 71Ga isotopes and observed a continuous and monotonic increase in thermal conductance. This behavior can be attributed to the small mass difference between the 71Ga and 69Ga isotopes. In contrast, a significant mass difference exists between 24Mg and Ga atoms, leading to more pronounced defect scattering induced by 24Mg compared to 71Ga isotopes. Consequently, the thermal conductance demonstrates an initial increase followed by a subsequent decrease, as illustrated in Figure S8b. 4H-SiC is also widely used in power electronics [6]. We investigate the thermal conductance enhancement in SiC systems with 14C isotopes. We first extract phonon properties from first-principles calculations with the quantum phonon population (Bose-Einstein distribution). A supercell of 4×4×2 and up to the fourth nearest atom neighbor are considered to obtain the third-order anharmonic interatomic force constants.
The thermal conductivity of SiC is calculated based on the single-mode relaxation time approximation method. We use 23×23×7 q-points for all temperatures to sample the Brillouin zone, consistent with the reference [3]. The bulk thermal conductivity of SiC at different temperatures is shown in Fig. S9a, and our results agree well with those in the literature [3]. To estimate the scattering from defects, we adopt the Tamura model [7]. We adopt mode-level phonon BTE calculations to investigate the thermal conductance enhancement from doping 10B atoms to replace Si atoms (system in Fig. 1(a)) at different temperatures, as shown in Fig. S9b. In the main text, we introduced doping with 14C isotopes and observed a continuous and monotonic increase in thermal conductance. This behavior can be attributed to the small mass difference between the 14C and 12C isotopes. In contrast, a significant mass difference exists between 10B and Si atoms, leading to more pronounced defect scattering. Consequently, the thermal conductance demonstrates an initial increase followed by a subsequent decrease, as illustrated in Figure S9b."

Response to Reviewer #2:

This manuscript reports on a study of the effects of defect scattering in the heating zone on the calculated thermal conductance, using molecular dynamics simulation and numerical solution of the Boltzmann transport equation. Contrary to the common expectation of enhanced resistance from defects, the results here show that defects in the heating zone can increase the derived thermal conductance. The authors attribute the enhanced thermal conductance to the defect-scattering-induced spatial redistribution of phonon energy and introduce the concept of directional nonequilibrium.
The authors have conducted extensive modeling to prove their arguments, and the referee feels that the disclosed underlying physics is interesting. However, the referee is not convinced that the observed phenomena are not due to numerical artifacts in the simulations. Therefore, the manuscript has to be reconsidered after the authors address the following concerns.
Response: We sincerely appreciate the reviewer's thoughtful comments on our manuscript and extend our gratitude for the valuable feedback provided. The reviewer's interest in the extensive modeling supporting our findings, and the positive assessment of the underlying physics, are greatly appreciated. In response to the specific concerns regarding the potential influence of numerical artifacts on the observed phenomena, we have carefully addressed the issues raised and implemented detailed modifications to the manuscript. Following thorough examination and multiple verifications across different cases, we affirm that the observed phenomena are not a result of numerical artifacts. We believe that these revisions and additions not only address the concerns but also enhance the clarity of our discussion, thereby strengthening the overall reliability of our research. It is important to emphasize that our computations adhere to rigorous and widely accepted standards and are grounded in realistic physics. Furthermore, we would like to highlight that the observed novel phenomenon shows promise for experimental validation, particularly at low temperatures. The reviewer's insights have been invaluable, and we hope that our efforts in addressing these concerns are satisfactory.
(i) The authors claimed that "since both MD and BTE can show that introducing defect scattering can lead to enhanced thermal transport, it is clearly not caused by simulation artifacts." The referee disagrees with this argument because in the BTE calculation, the boundary conditions are set to match the transport mechanism in the MD. As such, it is not a surprise that the MD and BTE yielded similar trends.
Response: Thanks for the valuable feedback. We appreciate the meticulous attention to our work and would like to address the concerns regarding potential simulation artifacts impacting our reported results.
Regarding the agreement between the two methods, it is essential to note that we did not artificially match the boundaries between BTE and MD. Instead, this agreement naturally arises from the consistent thermal transport mechanisms underlying both methods. MD operates in real space, whereas BTE employs phonon space to define boundaries. Our simulations utilized standard settings in BTE, including an adiabatic boundary and a thermalizing boundary, as well as standard settings in MD, including fixed atoms and commonly used thermostats such as the Langevin and Nose-Hoover chain thermostats. No special modifications were introduced in either method.
It should be noted that achieving quantitative agreement between MD and BTE is a difficult task that extends beyond boundary conditions. It necessitates a comprehensive understanding of thermal reservoirs, boundary conditions, accurate phonon property calculations, and precise computation processes for both methods.
Since MD works with atoms while BTE works with phonons, the alignment of MD and BTE has been a long-standing issue in nanoscale thermal simulation [2,3]. Our recent research has also highlighted these challenges [4]. In this study, we have successfully achieved quantitative alignment between the two methods (as illustrated in Fig. 5 in the manuscript), providing robust evidence for the accuracy of our calculations and supporting the credibility of our findings.
Additionally, we explored alternative boundary conditions, such as eliminating fixed atoms in MD and adopting periodic boundaries in a symmetrical system, as shown in Fig. R5b. Interestingly, even without fixed atoms, we observed an increase in thermal conductance due to defect scattering (Fig. R6). This phenomenon also persists when studying rough surfaces (as addressed in concern iii). The observed phenomena persisted under these different systems, giving us further confidence that our findings reflect real physics rather than numerical artifacts.
To enhance clarity and avoid potential misunderstandings, we have also revised the respective sentences in the updated manuscript (Page 3 and Page 8): "We also try a system without fixed atoms by using periodic boundaries and find the negligible influence of fixed atoms (see Supplementary Note S1)." "Both MD and BTE show that introducing defect scattering can lead to enhanced thermal transport. As will be shown later, with consideration of the mode-level phonon properties, the two methods quantitatively match, providing robust evidence for the accuracy of our calculations and supporting the validity of our findings."

In the updated Supplementary Materials: "We also eliminate fixed atoms in MD simulations and adopt periodic boundaries, as shown in Fig. S2. Without fixed atoms, there is also an increase in thermal conductance due to defect scattering (Fig. S3), indicating that fixed atoms have a negligible impact on the results."

(ii) The referee has the concern that the effect is actually induced by simulation artifacts, i.e., the way that heat is added to the heating zone. In fact, the authors stated: "When the scattering is rare (i.e., in the ballistic regime) inside the uniform volumetric heating zone (Fig. 4d), the phonon mode propagating oblique to the z direction (mode 2) travels a much longer distance in the heating zone than the mode propagating along the z direction (mode 1), and thus receives a much larger amount of energy." The referee feels that this explanation exactly suggests that the effect is indeed due to how energy is added to the heating zone.
Response: Thanks for the comments. We appreciate the attention to detail and would like to address the concerns regarding the potential influence of simulation artifacts on our reported results. It is important to clarify that the explanation presented in our manuscript is grounded in our understanding of the underlying phenomena and is not contingent on the simulation settings. In our simulations, we employed a standard thermal reservoir for the heat source in MD, known as the Nose-Hoover thermostat. This choice corresponds to a scenario of uniform volumetric heating and aligns with established practices in the field, as verified by both Boltzmann transport equation (BTE) and experimental validations [5,6]. It is essential to remark that artificially introducing directional phonon heating is technically impossible in MD simulations because of their atomistic nature. Instead, MD only permits control over the average temperature in the heating zone by scaling the atomic velocities, as sketched below. In the BTE calculations, we utilized a widely accepted volumetric heating method [7]. The resulting directional heating observed from both methods is an outcome deeply rooted in the principles of ballistic transport, and not a prescribed condition. We explain it by the difference in traveling distances of different phonon modes. As such, we believe that the observed effect is a legitimate physical phenomenon, free from numerical artifacts.
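As a minimal illustration of this point, a velocity-rescaling heat source acts on all atoms in the heating zone at once and only steers the average kinetic temperature; no propagation direction can be singled out. The sketch below is ours (hypothetical names), not code from the manuscript:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def rescale_to_temperature(velocities, masses, T_target):
    """Uniformly rescale atomic velocities in the heating zone so that the
    instantaneous kinetic temperature matches T_target. Because the same
    scale factor is applied to every atom, the heating is isotropic in the
    sense that no phonon propagation direction is preferentially pumped."""
    ke = 0.5 * np.sum(masses[:, None] * velocities**2)  # total kinetic energy
    ndof = 3 * len(masses)                              # degrees of freedom
    T_now = 2.0 * ke / (ndof * kB)
    return velocities * np.sqrt(T_target / T_now)
```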
(iii) The authors set the reflection at the upper boundary as specular, which helps maintain the direction of phonons from the normal direction of the boundary. Based on the referee's understanding of the manuscript, if the upper boundary is set as fully diffuse reflection/scattering, the effect will be significantly diminished, which would mean that the phenomenon is indeed a simulation artifact. Similarly, in MD, if the interface between the fixed atoms and the heating zone is set to be rough, the effects should be less significant.
Response: Thanks for the insightful comments and suggestions regarding the simulation setup in our manuscript. In response to the recommendation, we conducted additional simulations by modifying the upper boundary condition from specular to diffuse reflection in BTE, as shown in Fig. R7b. The results show a slight reduction in the thermal conductance enhancement compared to cases with specular boundaries (Fig. R7c). This reduction is consistent with the reviewer's point. However, a distinct thermal conductance increase through introduced defects is still observed. This indicates that diffuse boundary scattering does not remove the directional nonequilibrium of the defect-free case to any great degree, and the effect persists.
It should also be noted that both specular and diffuse reflection boundaries can be realized in experiments [8,9]. Consequently, our research provides guidance on maximizing the enhancement of thermal conductance through defect scattering in experiments, favoring the use of specular boundaries.

In the updated manuscript: "In all the aforementioned discussions, we consistently applied a specular reflection boundary condition to the upper boundary across all systems. To show the robustness of our findings, we also explored systems with diffuse reflection boundary conditions (see Methods). As illustrated in Fig. 7b, a decrease in thermal conductance enhancement is observed when compared to cases with specular boundaries. This is expected, as diffuse boundary conditions are similar to defect scattering and tend to randomize the phonon directions, thereby contributing to the reduction of directional nonequilibrium. It is noteworthy that both specular and diffuse reflection boundaries can be experimentally implemented [42,43], and the boundary in reality is most likely partially specular and partially diffuse. Consequently, our research offers valuable insights into optimizing thermal conductance enhancement through impurity scattering in experimental settings, with a preference for the utilization of more specular boundaries."

Response: Thanks to the reviewer for bringing attention to the definition of ΔT_heat in the first line of Page 9. We appreciate the reviewer's valuable feedback and have clarified its purpose and meaning. In our revised manuscript, we explicitly define ΔT_heat as the average temperature rise within the heat source relative to the substrate. This definition aligns with the average temperature rise within the heat source as defined in MD simulations. We have modified the corresponding part in the updated manuscript (Page 9): "ΔT_heat, where ΔT_heat denotes the average temperature drop inside the heating zone relative to the heating zone-substrate interface, with the calculation expressed as ΔT_heat = ⟨T⟩_heat − T_interface."

(v) In the derivation of thermal conductance from MD, the authors adopted an equation that includes the effects of the heat source and sink. Therefore, the derived conductance includes the boundary effect. What if the authors adopt the conductance as the ratio of heat flux over the temperature gradient in the substrate?
Response: We appreciate the insightful comment regarding the definition of thermal conductance. In response to the suggestion about adopting the conductance as the ratio of heat flux over the temperature gradient in the substrate, we find this proposal intriguing. In the ballistic transport regime, temperature no longer carries its macroscopic thermodynamic meaning and primarily serves as a representation of the local internal energy density. Previous studies have widely observed nonlinear temperature distributions even under constant heat flow [10]. Particularly in the ballistic limit, there can be heat flow even when there is no temperature gradient. In such systems, the temperature gradient cannot be well defined. Our recent research also supports this point [4].
The thermal conductance defined in this study is an effective thermal conductance used to characterize systems in which better heat dissipation leads to a lower temperature rise under the same heating conditions. It aligns with the definition of device thermal resistance used in the field of transistor heat dissipation [11]. Moreover, in the ballistic limit, this definition can be unified with the Landauer formula [12]. Regarding the reviewer's remark about the inclusion of heat source and sink effects: as shown in Fig. 2 in the manuscript, introducing defects lowers the temperature not only within the heating zone but also in the substrate. Meanwhile, it is noteworthy that the inclusion of heat sources constitutes a key innovation in our work. In past MD and BTE studies, the heat source and sink have often been ignored [13,14]. Our research emphasizes the importance of the heat source, showing its significant impact on heat transfer.
To enhance clarity and avoid potential misunderstandings, we also added sentences in the updated manuscript (Page 5): "To quantify the performance of the heat dissipation, we define the thermal conductance G = q/ΔT for this system, where q is the heat flux along the z direction in the substrate and ΔT is the average temperature rise in the heating zone [16]. The definition of this thermal conductance is similar to the widely used thermal resistance in the heat dissipation of transistors, representing that, under the same heat generation, the lower the device temperature rise, the smaller the thermal resistance, and the greater the thermal conductance [17][18][19]."
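This effective-conductance definition reduces to a one-line computation on any simulated temperature profile. The sketch below is our illustration (hypothetical names; the profile T(z) would come from an MD or BTE solution), not code from the study:

```python
import numpy as np

def effective_conductance(z, T, q, z_heat_end, T_sink):
    """Effective thermal conductance G = q / dT, where dT is the average
    temperature rise of the heating zone (z < z_heat_end) over the sink.
    With fixed heat generation, better dissipation -> smaller dT -> larger G."""
    dT = np.mean(T[z < z_heat_end]) - T_sink
    return q / dT
```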
(vi) Since there is no scattering in the substrate in the BTE modeling, the definition of ΔT_sub is confusing. It is a temperature jump at the lower boundary instead of a temperature drop across the substrate.

Response: Thanks for the valuable feedback on our manuscript. In response to the suggestion, we have made appropriate revisions in the updated manuscript (Page 9) to distinguish between the temperature drop across the substrate and the temperature jump at the lower boundary: "For these three cases, there is a temperature drop inside the heating zone and another temperature drop from the heating zone-substrate interface to the bottom boundary. Note that the temperature inside the substrate is constant, since the scattering inside the substrate is neglected [30]. The temperature drop from the heating zone-substrate interface to the bottom boundary is equal to the temperature jump at the boundary in the present cases. We divide the total temperature drop into two parts: the temperature drop inside the heating zone, ΔT_heat, and the temperature drop from the heating zone-substrate interface to the bottom boundary, ΔT_sub, i.e., the temperature jump at the boundary (Fig. 4a)."
Response: Thanks for the reviewer's insightful comment. We have noticed that Zhang et al. [15] recently utilized defects to assist phonon transport through kinked nanowires.
Another work published during the peer review process of this study, by Liu et al. [16], involves doping oxygen atoms into the van der Waals crystal TiS3 nanoribbons to induce lattice contraction and increase Young's modulus, leading to enhanced thermal conductivity. These studies differ fundamentally from the current study, which focuses on reducing directional nonequilibrium through defect scattering. Nevertheless, the potential coexistence of these effects provides further viable means for manipulating heat transfer at the micro- and nanoscale.
Furthermore, we believe the observed phenomenon holds promise for experimental validation. Our system has real-world counterparts, where the volumetric heating method aligns with optical and electrical heating in experiments [17,18].
The heat sink corresponds to the thermal reservoir in the heat bridge method [19,20].
Our discussions on the influence factors also provide guidance on maximizing the enhancement of thermal conductance through defect scattering in experiments.
Currently, spectral phonon nonequilibrium has been experimentally observed, and directional nonequilibrium systems may pose more stringent requirements, necessitating precise control of the heat source.We are actively working on achieving experimental enhancement of thermal conductance through defect scattering under spectral nonequilibrium.
We also add some discussions in the updated manuscript (Pages ...).

In this paper, the authors show that the defect-free heating zone overpopulates oblique-propagating phonons, while introducing defects would redirect phonons randomly to restore directional equilibrium. They demonstrate that defect scattering can enable such thermal transport enhancement in a wide range of temperatures, and offer an unconventional strategy for enhancing thermal transport. The results are interesting and important. There are some issues for the authors to address:

Response: We are grateful to the reviewer for the positive evaluation.

(i) A cross-sectional area of 8×8 is not large; whether there is an effect of phonon-boundary scattering on thermal conductivity needs to be further demonstrated.
Response: Thanks for the reviewer's insightful comment regarding the cross-sectional area and the potential effect of phonon-boundary scattering. Since periodic boundary conditions are applied in the lateral directions, as shown in Fig. 1 in the manuscript, the transport problem we model is effectively one-dimensional. To check whether a cross-sectional area of 8×8 is sufficient, we further increased the cross-sectional area (up to 24×24), as shown in Fig. R8. The results do not change, suggesting that phonon-boundary scattering does not play a significant role in the current system.

We have also added a few sentences in the updated manuscript (Page 3): "After a convergence test (see Supplementary Note S1), a cross-sectional area of 8×8-unit cells is considered and periodic boundary conditions are applied in lateral directions."

We also added a few sentences in the updated Supplementary Materials: "To check whether a cross-sectional area of 8×8-unit cells is sufficient, we further increase the cross-sectional area (up to 24×24-unit cells), as shown in Fig. S1. It can be seen that the results do not change, suggesting that phonon-boundary scattering does not play a significant role in the current system."

We have also added some discussions in the updated Supplementary Materials: "Figure S6 shows the phonon density of states (DOS) of Si with Ge impurities. It shows that the concentration has a minor impact on the DOS, most likely attributed to the low defect concentration. How different phonons contribute to the thermal transport can be analyzed from the phonon scattering rate or the phonon mean free path with different frequencies from first-principles calculations, as shown in Fig. S7."

Introducing defects in the heating zone tends to decrease ΔT_heat (i.e., the temperature drop inside the heating zone) and slightly increase ΔT_sub. This is also illustrated in Fig. R11, where Fig. R11a and b show ΔT_heat and ΔT_sub at different substrate lengths Lsub for the pure silicon system and the system with 0.5% Ge in the heating zone, respectively (Lheat = 10 nm and the total heat generation/heat flux is constant). Fig. R12 shows the ratio of the decreased ΔT_heat and the ratio of the increased ΔT_sub when comparing these two systems. Since the thermal conductance is given by G = q/(ΔT_heat + ΔT_sub), it is the significantly decreased ΔT_heat from introducing defects in the heating zone that leads to the overall thermal conductance enhancement shown in Fig. 5c.
In response to the reviewer's comment, two competing effects come into play when varying the substrate length Lsub. First, as shown by the red line in Fig. R12, when Lsub is large, the decrease of ΔT_heat is not as pronounced as in small-Lsub cases, because doping defects in a relatively small heating zone become less important to the heat transfer in the entire system. Hence, the thermal conductance enhancement attenuates for large Lsub. Second, as shown by the blue line in Fig. R12, when Lsub is small, the increase of ΔT_sub becomes more significant. This may originate from the more important role of the boundary when Lsub is small, which can induce additional ballistic thermal resistance and lead to a higher temperature rise [13]. As such, the thermal conductance enhancement also attenuates for small Lsub. Overall, these two competing effects determine that the most pronounced thermal conductance enhancement is obtained at a moderate substrate length.
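A back-of-the-envelope illustration of the two competing terms (the numbers below are invented for illustration and are not results from the manuscript):

```python
# G = q / (dT_heat + dT_sub): suppose defects in the heating zone lower
# dT_heat strongly while slightly raising dT_sub, at fixed heat generation.
q = 1.0                                    # arbitrary units
dT_heat_pure, dT_sub_pure = 6.0, 4.0       # defect-free case
dT_heat_def,  dT_sub_def  = 3.5, 4.5       # doped heating zone

G_pure = q / (dT_heat_pure + dT_sub_pure)  # 0.100
G_def  = q / (dT_heat_def + dT_sub_def)    # 0.125 -> conductance enhanced
```

The enhancement survives only while the drop in dT_heat outweighs the rise in dT_sub, which is why it peaks at a moderate substrate length.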
Fig. 6. Thermal conductance of 4H-SiC and wurtzite GaN systems. a Thermal conductance of 4H-SiC systems at different temperatures. b Thermal conductance of 4H-SiC systems at 100 K with a fixed length of the heating zone (10 nm) and different lengths of the substrate. c Thermal conductance of 4H-SiC systems at 100 K with a fixed substrate length and different lengths of the heating zone; d-f show the corresponding results for wurtzite GaN systems.
Fig. 7. Thermal conductance for heat generation with spectral nonequilibrium and a diffusely reflecting boundary (Si systems). a Thermal conductance for heat generation with spectral nonequilibrium. b Thermal conductance when adopting the diffusely reflecting boundary. Si systems with a 10 nm heating zone and a 40 nm substrate are studied.
Fig. R1 Phonon dispersion and the mode-level heat generation of Si from a rigorous electron-phonon coupling calculation.
Fig. R2 Thermal conductance for different heat generation (Si systems with a 10 nm heating zone and a 40 nm substrate) at 300 K. The red dotted line represents calculated results when adopting spectral-equilibrium heat generation. The light blue dashed line represents calculated results when assuming only phonons with frequencies larger than 400 cm⁻¹ (optical phonons) receive energy from heat generation. The dark blue solid line represents calculated results when adopting heat generation from rigorous electron-phonon coupling calculations.
Fig. S10 | a Phonon dispersion and the mode-level heat generation of Si from a rigorous electron-phonon coupling calculation. b Spectral phonon temperature distribution at the interface between the heating zone and the substrate at 100 K (Si systems with a 10 nm heating zone and a 40 nm substrate).
Fig. R3 Thermal conductance at different temperatures when doping ²⁴Mg atoms in GaN.
Fig. R4 Thermal conductance at different temperatures when doping ¹⁰B atoms in 4H-SiC.
Fig. S8 | Results of wurtzite GaN systems. a Bulk thermal conductivity of GaN. The symbols correspond to values from the reference [2]. b Thermal conductance at different temperatures when doping ²⁴Mg in GaN.
Fig. S9 | Results for 4H-SiC systems. a Bulk thermal conductivity of SiC. The symbols correspond to values from the reference [3]. b Thermal conductance at different temperatures when doping ¹⁰B in SiC.
Fig. R5 a Simulation system studied in MD in the manuscript. b Symmetrical simulation system with periodic boundaries instead of adopting fixed atoms in MD. The substrate is pure Si and the heating zone is Si with Ge impurities occupying random sites.
Fig. S2 | a Simulation system studied in MD in the manuscript. b Symmetrical simulation system with periodic boundaries instead of adopting fixed atoms in MD. The substrate is pure Si and the heating zone is Si with Ge impurities occupying random sites.
Fig. R7 a Simulation system studied in BTE with a specular reflection boundary in the manuscript. b Simulation system with a diffuse reflection boundary. c Thermal conductance for the specular and diffuse reflection boundaries shown in a and b (Si systems with a 10 nm heating zone and a 40 nm substrate) from phonon BTE calculations.
In the updated Methods (Page 21): "The diffusely reflecting boundary condition is another type of adiabatic boundary condition, in which the energy of phonons reflected from the boundary is the same along each direction, i.e., e(ω, s) = (1/π) ∫_{s′·n<0} e(ω, s′) |s′·n| dΩ′ for s·n > 0, where n is the boundary normal pointing into the domain."
Fig. R9 The phonon density of states for Si under different doped Ge concentrations.
Fig. R10 Phonon mean free path distribution of Si with Ge impurities calculated from first principles at a 50 K, b 100 K, c 300 K, and d 400 K.
Fig. S6 The phonon density of states for Si under different doped Ge concentrations.
Fig. R11 a ΔT_heat and b ΔT_sub at different substrate lengths Lsub for the pure silicon system and the system with 0.5% Ge in the heating zone (Lheat = 10 nm; constant total heat generation).
Fig. R12 The ratio of the decreased ΔT_heat and of the increased ΔT_sub at different substrate lengths Lsub.

We have also noticed another work by Liu et al. [45] published during the peer review process of this study, which doped oxygen atoms into van der Waals crystal TiS3 nanoribbons to induce lattice contraction and increase Young's modulus, leading to enhanced thermal conductivity. Distinct from inducing defects to mitigate directional nonequilibrium in this study, which reveals a general mechanism to enhance thermal transport, the strategy provided by Liu et al. is available only for limited material systems.
| 8,997.8 | 2024-04-17T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Krebs Cycle Intermediates Protective against Oxidative Stress by Modulating the Level of Reactive Oxygen Species in Neuronal HT22 Cells
Krebs cycle intermediates (KCIs) are reported to function as energy substrates in mitochondria and to exert antioxidant effects on the brain. The present study was designed to identify which KCIs are effective neuroprotective compounds against oxidative stress in neuronal cells. Here we found that pyruvate, oxaloacetate, and α-ketoglutarate, but not lactate, citrate, iso-citrate, succinate, fumarate, or malate, protected HT22 cells against hydrogen peroxide-mediated toxicity. These three intermediates reduced the production of hydrogen peroxide-activated reactive oxygen species, measured in terms of 2′,7′-dichlorofluorescein diacetate fluorescence. In contrast, none of the KCIs (used at 1 mM) protected against cell death induced by high concentrations of glutamate, another type of oxidative stress-induced neuronal cell death. Because these protective KCIs did not have any toxic effects (at least up to 10 mM), they have potential use for therapeutic intervention against chronic neurodegenerative diseases.
Introduction
The Krebs cycle is a series of enzymatic reactions that catalyze the aerobic metabolism of fuel molecules to carbon dioxide and water, thereby generating energy for the production of adenosine triphosphate molecules. Many types of fuel molecules can be drawn into and utilized by the cycle, including acetyl coenzyme A, derived from glycolysis or fatty acid oxidation. In eukaryotic cells, most of the enzymes catalyzing the reactions of the Krebs cycle are found in the mitochondrial matrix [1]. The compounds involved in the cycle, termed Krebs cycle intermediates (KCIs), function as energy donors and precursors for the synthesis of amino acids, lipids, and carbohydrates [2].
This aerobic metabolism may pervade every aspect of biology and medicine, because many papers published within the last 10 years suggest that KCIs regulate epigenetic processes and cellular signaling, possibly via protein binding [3]. KCIs activate specific signal transduction pathways and exert various biological actions, such as neuroprotection, anti-inflammation, osteogenesis, and anti-aging [3]. For example, external supplementation with pyruvate (PA), oxaloacetate (OAA), α-ketoglutarate (AKG), malic acid (MA), or fumarate (FA), but not lactic acid (LA), succinate (SA), citrate (CA), or iso-citrate (ICA), significantly extends the lifespan of Caenorhabditis elegans by activating various transcription factor-dependent transcriptional pathways [4][5][6]. Although only fragmentary information about the physiological roles of KCIs in the brain is available, KCIs have also been proposed as cardioprotective agents against myocardial infarction [7]. FA and AKG have been proposed to protect cardiac muscle, possibly through activating or inhibiting specific transcription factors, such as NF-E2-related factor 2 (NRF2) and hypoxia-inducible factor (HIF-1), respectively. In neurons, PA prevents hydrogen peroxide (H2O2)-induced cell death [8,9], protects the brain against experimental stroke via an anti-inflammatory mechanism [10], and prevents the age-dependent cognitive deficits seen in a mouse model of Alzheimer's disease [11]. These protections are possibly due to its α-ketoacid structure, which can directly react with H2O2 [8,9]. H2O2 is a stable, uncharged, and freely diffusible reactive oxygen species (ROS) with a putative second-messenger role in intracellular and extracellular signaling mechanisms [12]. The generation of H2O2 is relatively high in the brain, partly because of the high rate of oxygen consumption and partly because of the high level of expression of monoamine oxidase in this tissue [13]. In pathological conditions such as ischemia-reperfusion and Alzheimer's disease, various cell types may produce large amounts of H2O2 [13]. In addition to enzymatic defense mainly mediated by various enzymes (i.e., glutathione (GSH) peroxidase, catalase, and peroxiredoxins), non-enzymatic mechanisms can also contribute to the cellular defense against H2O2-induced cytotoxicity [13]. For instance, PA is abundant in mammalian cells and has the property of non-enzymatically reacting with H2O2 [8,9]. PA protects neurons against both exogenous and endogenous H2O2, and it also inhibits cell death mediated by H2O2 in neurons [8] and non-neuronal cells [9]. However, whether KCIs other than PA, OAA, and AKG really elicit neuroprotection through a simple direct interaction with exogenous H2O2 at extracellular locations has not yet been fully clarified.
In this present study, we examined the neuroprotective effects of KCIs against two types of oxidative stress in neuronal HT22 cells and found that only three of them (PA, OAA, and AKG), which have an α-keto acid structure, had significant neuroprotective effects by modulating the levels of ROS in the cells. We found that the other KCIs were not protective, suggesting that these neuroprotective KCIs, used singly or as a cocktail, could be a potential food supplement for preventing chronic neurodegeneration.
HT22 Cultures and MTT Assay
HT22 hippocampal neuronal cells were cultured as described previously [14][15][16]. In HT22 cells, high concentrations (mM levels) of Glu can induce cell death by depleting intracellular GSH through inhibition of cystine influx, and relatively low concentrations (µM levels) of H2O2 can induce cell death. Because HT22 cells do not have functional NMDA receptor proteins, they do not die due to excitotoxicity [14][15][16]. These cells were maintained in 10-cm dishes (Invitrogen, Carlsbad, CA, USA) containing 10 mL of Dulbecco's Modified Eagle medium supplemented with 10% (v/v) heat-inactivated (56 °C, 30 min) fetal calf serum (Invitrogen). The cells were seeded into 24-well plates at a density of 4 × 10⁴ cells/cm². When examining their effects on H2O2 toxicity, we added KCIs to the cultures after a 24-h incubation. Sixty minutes later, 100, 200, or 500 µM H2O2 was added, and the cells were then incubated for an additional 24 h. We set different pre-incubation times, 24 h and 1 h, for H2O2 and Glu toxicity, respectively. This was to stabilize the response to H2O2, and also because the Glu toxicity is completely inhibited by long preincubation (over 5 h) through an unknown mechanism [14][15][16]. When the effects of KCIs on oxidative Glu toxicity were examined, the cells were incubated for 1 h after having been seeded in wells of a 24-well plate, and the KCIs were then added to the cultures. Sixty minutes later, 5, 10, or 20 mM Glu was added, and the cells were then incubated for an additional 24 h. To evaluate survival of the HT22 cells, we performed the MTT assay [14][15][16].
DCF Assay
The extent of cellular oxidative stress was assessed by monitoring the formation of free-radical species using DCFH-DA, as described elsewhere [17]. Cells were plated 24 h before initiation of the experiment at a density of 40,000 cells/well in 24-well plates. KCIs and 10 µM DCFH-DA were added to the cells 30 min before the measurement. The plate was set into a Spark 10M (Tecan Japan, Tokyo, Japan) under an atmosphere of 5% CO2 and a temperature of 37 °C. H2O2 (200 µM) or vehicle was added to wells at 30 min, and the cells were incubated further for 180 min. DCF fluorescence was measured at a 485-nm excitation wavelength and 538-nm emission wavelength at 10-min intervals. Fluorescence values were expressed as a percentage of the value for the untreated control.
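The normalization described in the last sentence amounts to a one-line computation; the sketch below is our illustration (hypothetical names, not software from this study):

```python
import numpy as np

def percent_of_control(fluorescence, control):
    """Express DCF fluorescence as a percentage of the untreated control,
    matched time point by time point (readings every 10 min)."""
    return 100.0 * np.asarray(fluorescence, dtype=float) / np.asarray(control, dtype=float)

# A 30-fold ROS rise over control would read as 3000%:
print(percent_of_control([300.0], [10.0]))  # [3000.]
```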
Statistical Analysis
Experiments presented herein were repeated at least three times, with each experiment performed in quadruplicate. Data were presented as the mean ± SD. The statistical significance of differences was examined by performing Student's t-test.
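The analysis just described can be sketched in a few lines (our illustration; the viability numbers below are invented, not data from this study):

```python
import numpy as np
from scipy import stats

def summarize(values):
    """Mean and sample SD (ddof=1) for one treatment group."""
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1)

# Hypothetical MTT viability readings (% of control), quadruplicate wells:
h2o2_only = [42.0, 45.5, 40.1, 43.8]
h2o2_plus_pa = [88.2, 91.0, 85.7, 90.3]
t_stat, p_value = stats.ttest_ind(h2o2_only, h2o2_plus_pa)  # Student's t-test
```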
Neuroprotective Effect of PA, OAA, and AKG on HT22 Cells
We used HT22 cells, a neuronal cell line derived from a mouse hippocampus, as a model for oxidative cell damage. Treatment of the cells with 100, 200, or 500 µM H2O2 (Figure 1A) or with 5, 10, or 20 mM Glu (Figure 1B) induced concentration-dependent cell death. PA, OAA, and AKG protected the cells against H2O2-induced toxicity (Figure 1A), whereas these KCIs were not, or only very slightly, protective against Glu toxicity (Figure 1B). These KCIs were not toxic to HT22 cells up to 10 mM, but they caused a small but significant reduction in cell survival when used at 20-50 mM (Figure 2). These results suggest that PA, OAA, and AKG could protect neuronal cells against H2O2, but not against Glu.
Regulation of ROS by PA, OAA, and AKG
As the neuroprotective effects of these KCIs may derive from their α-keto acid structure, which can directly react with H2O2 inside the cells, we next examined whether the KCIs could reduce the level of ROS in HT22 cells (Figure 3). We quantified the levels of ROS with the ROS-sensitive fluorescent indicator DCFH-DA by use of a Spark 10M device (Tecan Japan, Tokyo, Japan). Based on the fluorescence activity, H2O2 (200 µM) increased the levels of ROS by 20-40-fold. ROS levels plateaued at 45-60 min of exposure. PA (Figure 3A) or OAA (Figure 3B) at 1, 2, or 10 mM, and 2 or 10 mM AKG (Figure 3C), significantly lowered intracellular ROS formation; 1 mM AKG did not reduce the ROS level. These results suggest that these KCIs used at low mM levels effectively reduced the ROS level in neural cells.

Figure 3 legend: Values are the means ± SD from four experiments per group. Diamonds, control; circles, H2O2; squares, H2O2 + KCI. The KCI-alone groups are not shown, because KCIs themselves did not affect ROS levels at any time point.
No Protective Effects by LA, CA, ICA, SA, FA, and MA against H2O2
Next, we assessed whether LA, CA, ICA, SA, FA, and MA could protect HT22 cells against H2O2, and found that these intermediates were not, or only very slightly, protective against H2O2 (100, 200, and 500 µM)-mediated cytotoxicity (Figure 4). Then, we examined whether these KCIs could protect the cells against Glu toxicity. Except for SA against 5 mM Glu, these KCIs (at 1 mM) showed no protection against high concentrations (5, 10, and 20 mM) of Glu (Figure 5). These results indicate that LA, CA, ICA, SA, FA, and MA could not protect HT22 cells against oxidative stress. LA, SA, FA, and MA were not toxic to the cells up to 10 mM; by some unknown mechanism, CA and ICA were toxic when tested at around 5-10 mM (Figure 2). A high concentration of FA had an exceptional action on the cells, because 10 mM FA gave significant protection against high concentrations of Glu (Figure 5).
Discussion
Here, we found that the α-keto acid group-containing KCIs (PA, OAA, and AKG) could protect neurons against H2O2, possibly through direct interaction with H2O2, although we did not provide direct evidence for this chemical reaction itself. Because DCFH-DA is hydrolyzed by cytosolic esterase and is activated by ROS in the cytoplasm, the inhibition of the ROS increase by PA, OAA, and AKG could be due to their H2O2-scavenging effect inside the cells. The conclusion of this study is illustrated in Figure 6A and B. PA, OAA, and AKG were neuroprotective, but not the other KCIs. For example, LA and MA, putative neuronal energy substrates that may produce protective KCIs through their metabolism [1,2], did not protect the cells (Figures 4 and 5). AKG may have other protective effects than α-keto acids such as PA and OAA: AKG at 1 mM protected the cells against H2O2 (Figure 1) without causing a significant reduction in ROS levels (Figure 3). Because AKG is reported to activate the degradation of the HIF-1α subunit and reduce the expression of downstream enzymes [6,7], reduction of this pathway may have been involved in the protective effects of AKG. FA may be another exceptional KCI. FA at 10 mM protected the cells against Glu-mediated cytotoxicity (Figure 5), although the other KCIs did not do so. Because FA is reported to activate the Nrf2 pathway and induce phase-2 enzymes [18], the activation of this pathway may have been involved in the FA-induced protection.
Interestingly, the protection afforded by PA, OAA, and AKG was totally different from that of NRF2 activators such as carnosic acid [19,20], zonarol [21,22], and tert-butyl hydroquinone [23].
These activators can protect cells against a high concentration of Glu, but not against H2O2. Nrf2 activators protect cells against high Glu concentrations by inhibiting the decrease in GSH through induction of Phase-2 enzymes, the action of which enhances the production of GSH [24][25][26]. The increase in GSH by NRF2 activators is not sufficient for protection against H2O2 [24][25][26].
Conclusions
This present study importantly suggests that there are two distinct oxidative stress mechanisms: one induced by H2O2 and the other by Glu. α-Keto acid group-containing KCIs (PA, OAA, and AKG) protected the cells against the former but not against the latter, whereas Nrf2 activators act vice versa. The central nervous system is particularly vulnerable to oxidative damage due to its high energy expenditure and oxygen demand [24][25][26]. Elevated concentrations of free radicals and resultant oxidative damage, such as lipid peroxidation and protein carbonylation, have been repeatedly demonstrated in neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease, and ischemic stroke [24][25][26]. Thus, PA, OAA, and AKG, being natural metabolic intermediates and energy substrates, exert antioxidant effects in the brain and other tissues susceptible to H2O2.
| 3,747.8 | 2017-03-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Identification of the Histidine Protein Kinase KinB in Pseudomonas aeruginosa and Its Phosphorylation of the Alginate Regulator AlgB*
The exopolysaccharide alginate is an important virulence factor in chronic lung infections caused by the bacterium Pseudomonas aeruginosa. Two positive activators for alginate synthesis, algB and algR, are members of a superfamily of response regulators of the two-component regulatory system. AlgB belongs to the NtrC subfamily of response regulators and is required for high-level production of alginate. In this study, an open reading frame encoding a polypeptide of 66 kDa, designated kinB, was identified immediately downstream of algB. The sequence of KinB is homologous to the histidine protein kinase members of two-component regulatory systems. Western blot analysis of a P. aeruginosa strain carrying a kinB-lacZ protein fusion and studies of kinB-phoA fusions indicate that KinB localizes to the inner membrane and has an NH2-terminal periplasmic domain. A KinB derivative containing the COOH terminus of KinB was generated and purified. In the presence of [γ-32P]ATP, the purified COOH-terminal KinB protein was observed to undergo progressive autophosphorylation in vitro. Moreover, the phosphoryl label of KinB could be rapidly transferred to purified AlgB. Substitutions of the residues conserved among histidine protein kinases abolished KinB autophosphorylation. These results provide evidence that kinB encodes the AlgB cognate histidine protein kinase.
Chronic pulmonary infection with the bacterium Pseudomonas aeruginosa is a major factor in the poor prognosis and high mortality rate of patients with cystic fibrosis (CF) (1). Most P. aeruginosa strains isolated from the CF respiratory tract overproduce an exopolysaccharide called alginate, which gives the colonies a mucoid morphology (2). This highly viscous polysaccharide plays a role in the pathogenesis of P. aeruginosa by imparting antiphagocytic properties (3) and an adherence mechanism (4). Most of the genes involved in alginate biosynthesis are in a tightly regulated operon at 34 min on the 75-min chromosome (5). High expression of the alginate biosynthetic genes requires the activation of an alternative sigma factor (σ22) encoded by algT (algU) at about 68 min on the chromosome (for review, see Ref. 6). In addition, a cascade of several positive regulators is also required for high expression of alginate genes (7,8). Two of these, AlgB and AlgR (AlgR1), belong to the superfamily of response regulators of prokaryotic two-component regulatory systems (9,10).
Two-component regulation is a mechanism for signal transduction to control cellular adaptations in response to environmental or physiological changes (for review, see Ref. 11). Observed in many bacterial species (12,13), as well as in yeasts (14) and plants (15), two-component systems generally include a histidine protein kinase and a cognate regulator protein. In general, the histidine protein kinase senses a specific environmental stimulus and undergoes autophosphorylation at a histidine residue present in a highly conserved carboxyl-terminal domain of the protein. This phosphate group is subsequently transferred to an aspartate residue in the amino terminus of the response regulator, resulting in a change in the activity of the response regulator that leads to an adaptive response (11,12). Response regulators can also catalyze kinase-independent phosphorylation and dephosphorylation by low-molecular weight phosphorylated compounds (e.g. acetyl phosphate, carbamyl phosphate, etc.), which may serve to integrate environmental control with the physiological status of the cell (16).
Alginate overproduction by P. aeruginosa is generally seen in strains causing pulmonary infection of CF patients. Specific signals present in the environment of the CF lung (e.g., dehydration, high osmolarity, limiting nutrients, antibiotics) may play a role in stimulating alginate production (for review, see Ref. 17). However, the role or requirement for any particular in vivo signal in the expression of alginate genes has not been well established. The discovery of two-component response regulators (i.e., AlgB and AlgR) suggests that environmental signals may play a role in the regulation of alginate production. Moreover, inhibitors of the two-component regulatory pathway inhibit the expression of alginate biosynthetic genes (18). However, proteins in P. aeruginosa with sensor kinase activity that can phosphorylate AlgB or AlgR have not been demonstrated. A gene adjacent to algR was recently identified that encodes a protein (FimS, AlgZ) with homology to an atypical two-component sensor (19,20), but whether it functions as a kinase of AlgR is unknown. In this study we identified a gene called kinB, located immediately downstream of algB, that encodes a protein with high similarity to typical histidine protein kinases of two-component systems. Our data indicate that KinB is an inner membrane protein with histidine protein kinase activity that is capable of promoting autophosphorylation and rapid transfer of the phosphate to AlgB.
EXPERIMENTAL PROCEDURES
Bacterial Strains and Growth Conditions-The P. aeruginosa strains utilized in this study were FRD1, an alginate-overproducing (Alg+) CF isolate, and its derivative FRD444 (Alg−, algB::Tn501), which contains a mercury resistance (Hgr) transposon marker in algB (21). Escherichia coli strains HB101 and JM109 were used in routine cloning manipulations (22); BL21(DE3) was used to express His6-tagged KinB; XL-2 Blue was used to overexpress AlgB. L broth (10.0 g of tryptone (Difco), 5.0 g of yeast extract (Difco), 5.0 g of NaCl/liter, pH 7.5) was used for the routine culture of P. aeruginosa and E. coli. A 1:1 mixture of Pseudomonas isolation agar (Difco) and L agar was used to select for P. aeruginosa following triparental matings. Selective antibiotics used for P. aeruginosa were carbenicillin at 300 µg/ml and tetracycline at 100 µg/ml; selective antibiotics used for E. coli were ampicillin at 100 µg/ml, kanamycin at 35 µg/ml, and tetracycline at 15 µg/ml. HgCl2 was used at 18 µg/ml for both P. aeruginosa and E. coli.
Nucleic Acid Manipulations and Plasmids-Cloned DNA fragments utilized in this study are shown in Fig. 1. Most routine genetic manipulations were performed as described elsewhere (22). Plasmid DNA was isolated from E. coli using Qiagen columns and procedures (Qiagen Corp.). Genomic DNA of P. aeruginosa was prepared using a protocol previously described (21). Restriction endonucleases were purchased from Boehringer Mannheim and New England Biolabs. To isolate DNA that included sequences located downstream of algB, chromosomal DNA from FRD444 (algB::Tn501) was digested with BamHI (Tn501 is not cut by BamHI), ligated into the cosmid vector pEMR2 (23), packaged in vitro into phage particles (Gigapack II cloning kit, Stratagene), and transduced into HB101. One representative clone (pDJWA10, Fig. 1) that conferred Hgr contained approximately 15 kb of DNA upstream and 10 kb of DNA downstream of algB. Plasmid pDJW130 (Fig. 1) had a 0.8-kb XhoI-EcoRI fragment from pJG12 (24) cloned into the vector pKS(−) that was used as a hybridization probe; it was digoxigenin-labeled by the polymerase chain reaction using T3 and T7 primers. This probe was used to identify a 4-kb ClaI-HindIII fragment from pDJWA10 which contained a portion of algB and the entire kinB gene (below), which was then cloned into pUC19 to form pSM67 (Fig. 1).
DNA Sequencing and Analysis-To prepare the DNA downstream of algB for sequence analysis, the 4-kb ClaI-HindIII fragment of pSM67 was digested with PstI or partially digested with Sau3AI, and the resulting fragments were subcloned into the PstI or the BamHI site of M13mp19 (New England Biolabs), respectively. Single-stranded DNA templates were prepared from these M13mp19 clones using a sample preparation protocol (Applied Biosystems). DNA sequencing reactions were performed with a Taq Dyedeoxy terminator cycle sequencing kit (Applied Biosystems) using a Perkin-Elmer DNA thermal cycler and run on an Applied Biosystems 373A DNA sequencer. DNA fragments were sequenced on both strands, and the sequence data obtained were aligned using SeqMan software (DNASTAR) on an Apple Macintosh computer. To verify alignment of the sequence contigs, six additional sequences were obtained by manual sequencing of pSM67 (Fig. 1) using T7 DNA polymerase version 2, the 7-deaza-dGTP sequencing kit (Life Sciences), and synthesized oligonucleotide primers: p50 (5′ CGGCTGTCCTTCTCCAGGTC 3′), p51 (5′ CCACTACACCTCCACCGATC 3′), p52 (5′ CAAGCGCACGGTATCACC 3′), p53 (5′ GCATATCGACGCTGAGCATG 3′), p54 (5′ CGGTGGTGCTGGCCTGG 3′), and p55 (5′ CGCCATTGTCTTCCACCGC 3′). Homology searches and alignments were performed with the Basic Local Alignment Search Tool (BLAST) Network Service at the National Center for Biotechnology Information, National Institutes of Health (25).
Construction and Analysis of LacZ Fusion Proteins-To construct a kinB-lacZ protein fusion, a 2.6-kb EcoRI fragment containing algB-kinB′ was cloned into pMLB1034 (26), resulting in pSM78; this was followed by the introduction of a mob site on an EcoRI fragment (27) to form pSM82 (Fig. 1). An algB-lacZ protein fusion containing the amino-terminal 379 amino acids of AlgB was constructed by cloning a 3.4-kb SmaI-DraI fragment of pMLB1034 containing lacZ into the EcoRV site of algB in pJG221 (21) to form pSM33; this was followed by the introduction of a mob site on a HindIII fragment (27) to form pSM35 (Fig. 1). Plasmids were later moved into P. aeruginosa FRD1 by triparental mating as described previously (24), which resulted in their integration into the chromosome by homologous recombination. Expression of LacZ fusion proteins in P. aeruginosa was evident by their β-galactosidase activity, as detected by the formation of blue colonies on L agar plates containing 5-bromo-4-chloro-3-indolyl-β-D-galactoside at 75 µg/ml. For immunoblot analyses of LacZ protein fusions, overnight cultures of the P. aeruginosa strains carrying lacZ fusions (FRD1::pSM82 and FRD1::pSM35) were diluted 1:50 in 100 ml of L broth with antibiotics and agitated at 37°C to an A600 of 0.7. Cells were resuspended in 10 ml of A buffer (10 mM Tris-HCl, pH 7.4, 1 mM EDTA, 1 mM EGTA, 0.2 mM dithiothreitol), passed twice through a French press (15,000 p.s.i.), and centrifuged at 10,000 × g for 10 min at 4°C to remove unbroken cells. The supernatant obtained was used as the whole cell extract. A sample was centrifuged at 200,000 × g for 60 min at 4°C, and the supernatant was regarded as the fraction enriched for cytoplasmic proteins; the pellet was resuspended in 1.0 ml of A buffer and regarded as the fraction enriched for membrane proteins. After determining the protein concentration by the Bradford method (28), samples were diluted in sodium dodecyl sulfate (SDS) sample buffer (60 mM Tris hydrochloride, pH 6.8, 2% SDS, 10% glycerol, 0.1 mg of bromphenol blue/ml, 5% 2-mercaptoethanol), and 30 µg of each fraction were subjected to electrophoresis on an SDS-8% polyacrylamide gel. Proteins were electrotransferred to a nitrocellulose membrane, and LacZ fusion proteins were detected with rabbit anti-β-galactosidase polyclonal antibody (5 Prime → 3 Prime, Inc., Boulder, CO; 1:5,000 dilution) as the primary antibody; goat anti-rabbit horseradish peroxidase conjugate (Sigma; 1:30,000 dilution) was used as the secondary antibody. Protein bands were visualized with chemiluminescent Western blot detection reagents (ECL, Amersham Corp.) on film (Kodak X-Omat AR) exposed for 2 min.
Analysis of PhoA Fusion Proteins-Constructions containing kinB-phoA translational fusions were based on the plasmid pSM111 (Fig. 1). Polymerase chain reaction amplification was used to generate DNA fragments starting at the MluI site in algB to sites in kinB terminating at codons for Asp-148 or Ile-211, at which the primers generated BamHI sites. These MluI-BamHI fragments were each joined to a 2.6-kb BamHI-XbaI fragment containing phoA from pPHO7 (29), and cloned into pSM111, replacing the existing MluI-XbaI fragment, to form pSM126 and pSM127, respectively. A KinB-PhoA fusion with a junction at residue Phe-379 was constructed by using a linker to join an MluI-EcoRI restriction fragment containing algB-kinB to a 2.6-kb SmaI-XbaI fragment containing phoA from pPHO7; this was cloned into pSM111, replacing the existing MluI-XbaI fragment, to form pSM128 (Fig. 1). Protein fusions containing PhoA (alkaline phosphatase) were verified by Western blot analysis using rabbit anti-alkaline phosphatase (Sigma). Colonies containing PhoA fusions with alkaline phosphatase activity (i.e., localized to the periplasm) were screened for blue color on L agar containing 5-bromo-4-chloro-3-indolyl phosphate at 40 µg/ml.
Purification of the COOH Terminus of KinB (C-KinB)-To construct plasmids that overexpressed a His6-tagged carboxyl-terminal (Gly-198 to Val-595) fragment of KinB (HC-KinB), a 1.6-kb AscI fragment of pSM67 (Fig. 1) was cloned into pNEB193 (New England Biolabs) to form pSM93; this was subsequently digested with SacI and HindIII, and the 1.6-kb fragment containing kinB was cloned in pET28b (Novagen), resulting in pSM95 (Fig. 1). E. coli BL21(DE3) harboring pSM95 was agitated overnight at 37°C in 100 ml of L broth with kanamycin. The cells were harvested by centrifugation and resuspended in 100 ml of L broth with kanamycin and 1 mM isopropyl β-D-thiogalactopyranoside for induction of the tac promoter (Ptac). After incubation at 30°C with aeration for 2 h, the cells were harvested and incubated in 20 ml of 20 mM Tris-HCl, pH 8.0, containing lysozyme (100 µg/ml) for 30 min at 4°C. Following sonication, the lysate was centrifuged at 12,000 × g for 15 min, and the supernatant was filtered (0.45-µm disc filter, Millipore). HC-KinB in the cell extract was purified on a 2.5-ml His-Bind nickel column (Novagen) according to the manufacturer's protocol. As estimated by SDS-PAGE and Coomassie Blue staining, HC-KinB was over 95% pure. The His6 tag was removed from HC-KinB by digestion with 25 units/ml thrombin (Novagen) for 2 h at 22°C to form C-KinB, which was subjected to an amino-terminal sequence analysis (Biotechnology Center, St. Jude Children's Research Hospital, Memphis, TN).
Purification of AlgB-E. coli XL-2 Blue harboring pDJW52 (Fig. 1) expresses algB under the control of Ptac as described previously (9). Cells from 400 ml of an overnight culture of XL-2 Blue(pDJW52) were resuspended and agitated in 400 ml of fresh L broth with ampicillin and 1 mM isopropyl β-D-thiogalactopyranoside at 30°C for 3 h. Cells were harvested and then incubated for 30 min at 4°C in 20 ml of 20 mM Tris-HCl, pH 8.0, containing lysozyme (100 µg/ml). Following sonication, the lysate was centrifuged at 12,000 × g for 10 min, and the supernatant was centrifuged at 200,000 × g for 60 min. Proteins in the clear supernatant were precipitated with 35% ammonium sulfate (J. T. Baker Inc.), and the precipitate was resuspended and dialyzed against 15 mM BisTris propane, pH 7.0, 20 mM NaCl. A sample (20 ml containing 8.4 mg of protein) was loaded onto an AP-2 column (Waters) packed with Protein-PAK DEAE 40HR anion exchange matrix (Waters), and a linear 20-160 mM NaCl gradient in 15 mM BisTris propane, pH 7.0, was used to elute proteins from the column. AlgB eluted at 130 mM NaCl and was estimated by SDS-PAGE and Coomassie Blue staining to be >90% pure.
In Vitro Phosphorylation Assays-Autophosphorylation of C-KinB was performed at 22°C in P buffer (50 mM Tris-HCl, pH 7.5, 50 mM KCl, 5 mM MgCl2). C-KinB was diluted to a final concentration of 2.5 μM and distributed in 9-μl aliquots for each reaction. Each reaction was started by adding [γ-32P]ATP (30 Ci/mmol, Amersham) to a final concentration of 33.3 μM and was stopped by the addition of 3 μl of 5× SDS sample buffer. Unincorporated label was removed by passage through a 1-ml Sephadex G-25 (Pharmacia Biotech Inc.) column, and samples were electrophoresed on an SDS-10% polyacrylamide gel and examined by autoradiography. To examine the time course of C-KinB autophosphorylation, phosphorylation reactions were stopped by adding 10 μl of 200 mM sodium acetate, pH 4.0, and immediately spotting the mixture onto a phosphocellulose membrane (Beckman) pre-equilibrated with 25 mM sodium acetate, pH 4.0. The membranes were washed three times for 10 min each in 800 ml of buffer containing 25 mM sodium acetate, pH 4.0, and the radioactivity on the dried membranes was measured (TRI-CARB 2000 liquid scintillation analyzer). In studies demonstrating the transfer of phosphoryl label from C-KinB to AlgB, 13 pmol of KinB was phosphorylated for 60 min at 22°C in a 10-μl mixture under the conditions described above. AlgB (40 pmol) was added to the mixture, and the reaction was terminated after 90 s by adding 3 μl of 5× SDS sample buffer. The samples were passed through a 1-ml Sephadex G-25 column to remove unincorporated label and analyzed by SDS-10% PAGE, followed by autoradiography.
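The amounts quoted above fix the working concentrations and the AlgB:C-KinB molar ratio cited later in the text. A minimal Python sketch of this bookkeeping, using only the quantities stated in the protocol (the nominal 10-μl volume ignores the small volume added with AlgB), is:

def concentration_uM(picomoles: float, volume_ul: float) -> float:
    # 1 pmol/ul equals 1 uM, so the conversion is a simple ratio.
    return picomoles / volume_ul

kinb_pmol, algb_pmol = 13.0, 40.0   # amounts stated in the protocol above
reaction_volume_ul = 10.0           # nominal volume (assumption: AlgB addition neglected)

print(f"C-KinB: {concentration_uM(kinb_pmol, reaction_volume_ul):.2f} uM")
print(f"AlgB:   {concentration_uM(algb_pmol, reaction_volume_ul):.2f} uM")
print(f"AlgB : C-KinB molar ratio = {algb_pmol / kinb_pmol:.1f} : 1")

The 1.3 μM C-KinB value agrees with the concentration quoted in the phosphotransfer experiments below, and the ratio reproduces the roughly 3:1 figure cited in the Discussion.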
Nucleotide Sequence Accession Number-The nucleotide sequence data and inferred amino acid sequence reported here for kinB have been deposited in the GenBank™ data base under accession number U97063.
RESULTS
Cloning and Identification of kinB-We examined whether a gene (kinB) encoding a sensor kinase was closely linked to a known gene (algB) encoding a response regulator that controls the alginate biosynthetic operon in P. aeruginosa. Several studies on bacterial two-component regulatory systems have shown that genes encoding a response regulator and its cognate histidine protein kinase are often linked (12). A 25-kb BamHI fragment containing the DNA flanking algB (pDJWA10, Fig. 1) was obtained from genomic DNA of P. aeruginosa FRD444, a strain with an algB::Tn501 allele (21) that provided a selectable marker (mercury resistance) for the DNA in this region. A 4-kb ClaI-HindIII fragment was then subcloned from the region immediately downstream of algB (pSM67, Fig. 1). This was subjected to a sequence analysis, and a putative kinB ORF of 1,788 bp was observed in the same direction of transcription as algB (Fig. 2). The kinB ORF had a translation initiation codon that overlapped with the algB termination codon, suggesting that expression of algB and kinB may be translationally coupled. The kinB ORF predicted a polypeptide of 595 amino acids with a molecular weight of 66,078. Two hydrophobic domains at the amino terminus of KinB were observed (underlined in Fig. 2). An 11-base pair inverted repeat sequence located 75 bp downstream of the kinB ORF may serve as a ρ factor-independent terminator (shown as hatched lines in Fig. 2).
The kinB Gene Encodes a Protein with Homology to Histidine Protein Kinases-A homology search showed that the KinB sequence was similar to a number of histidine protein kinases in two-component regulatory systems. Fig. 3 depicts an alignment of the KinB sequence with that of PhoR, a similarly sized histidine protein kinase in Bacillus subtilis (30). Overall, KinB shows 31% identity and 59% similarity with PhoR. The most conserved sequences were in four regions that are characteristic of histidine protein kinases (marked with hatched boxes in Fig. 3). The H box is proposed to be the phosphorylation domain and may also be involved in the dimerization of the kinase monomers; the N, D/F, and G boxes are proposed to form a nucleotide binding surface in the tertiary structure within the active site (31). The residues in these boxes that are believed to be critical (marked with triangles in Fig. 3) were all conserved in KinB and PhoR. In addition, both KinB and PhoR contained two hydrophobic domains that were similarly positioned in their amino termini (underlined and overlined in Fig. 3). Both hydrophobic regions of KinB were sufficient in length to form transmembrane domains, suggesting that KinB may be localized to the inner membrane, as is PhoR.
Membrane Localization of KinB-LacZ in P. aeruginosa-A KinB-LacZ fusion protein (encoded by pSM82, Fig. 1) was constructed to test the expression of the kinB ORF in P. aeruginosa. The KinB-LacZ fusion was predicted to be a 157.4-kDa peptide, since the amino-terminal 379 amino acids of KinB (41.6 kDa) were fused to a lacZ derivative expressing all but the first eight amino acids of LacZ (115.8 kDa). As a control, an AlgB-LacZ fusion protein of 151.4 kDa was constructed (encoded by pSM35, Fig. 1). The plasmids containing the kinB-lacZ and algB-lacZ fusion genes were in suicide vectors, and their mobilization into P. aeruginosa FRD1 resulted in chromosomal integration at the site of DNA homology (Fig. 4A). FRD1::pSM82 and FRD1::pSM35, harboring the respective kinB-lacZ and algB-lacZ fusions in single copy, both showed β-galactosidase activity, indicating that each ORF expressed a protein in P. aeruginosa. The kinB-lacZ and algB-lacZ encoded fusion proteins were also analyzed in a Western blot analysis of whole cell extracts, using a polyclonal antibody specific for LacZ (Fig. 4B, lanes 1 and 4); this showed that their electrophoretic mobilities were consistent with the predicted sizes. The KinB-LacZ fusion produced in FRD1::pSM82 contained a large amino-terminal fragment of KinB that included both putative transmembrane domains. To test whether the KinB-LacZ hybrid localized to the membrane, whole cell extracts of FRD1::pSM82 were used to obtain fractions enriched for either cytoplasmic or membrane proteins. Extracts containing AlgB-LacZ (FRD1::pSM35) were processed in parallel. Using anti-LacZ in the Western blot analysis, KinB-LacZ was detected in the membrane fraction, but not in the cytoplasmic fraction (Fig. 4B, lane 2), suggesting that KinB was indeed associated with the membrane. In contrast, the AlgB-LacZ fusion protein, which does not contain a potential transmembrane domain (9), was detected in the cytoplasmic fraction (Fig. 4B, lane 6), but not in the membrane fraction.
FIG. 2. Sequence of kinB and its inferred amino acid sequence. P. aeruginosa DNA was sequenced by the chain termination method. Numbers at the right represent nucleotides, and some pertinent restriction sites are shown. The hatched lines indicate a potential ρ factor-independent transcription termination sequence. The boxed sequence represents a potential ribosome binding site (RBS) for kinB. The asterisks mark the termination codons for algB and kinB. The amino acid sequences of the predicted carboxyl terminus of AlgB and the full length of KinB are shown under the nucleotide sequence. Numbers to the left represent amino acids in KinB. Highly conserved amino acid residues in KinB that are characteristic of histidine protein kinases are highlighted in reverse type. Underlined amino acids represent putative membrane-spanning domains in KinB.

Study of Membrane Topology with KinB-PhoA Fusions-The two hydrophobic domains in the amino terminus (residues 13-39 and 170-190) of KinB, which may serve as transmembrane domains, were evident in the hydrophilicity plot (Fig. 5A). Thus, the region between the two putative transmembrane domains (residues 40-169) of KinB was predicted to be in the periplasmic space. To test this, KinB-PhoA fusions were constructed with junctions at residues Asp-148, Ile-211, and Phe-379 (Fig. 5B). All three KinB-PhoA fusions expressed proteins of the predicted size that were readily detected in whole cell extracts of E. coli using a Western blot analysis and an antibody specific for PhoA (Fig. 5C). In that the phoA gene encodes the periplasmic enzyme alkaline phosphatase, such protein fusions are enzymatically active (PhoA+) only if translocated to the periplasm (32). The KinB(D148)-PhoA fusion retained the first transmembrane domain and was PhoA+ in E. coli (Fig. 5B), suggesting that the amino terminus of KinB between the two transmembrane domains was periplasmic. In contrast, bacteria expressing KinB(I211)-PhoA and KinB(F379)-PhoA, in which the fusions were downstream of the two transmembrane domains, were not enzymatically active for PhoA (Fig. 5B). This suggests that the COOH terminus of KinB was localized to the cytoplasm. Thus, the KinB amino terminus appears to be the periplasmic sensor domain, and the COOH terminus contains the cytoplasmic histidine kinase domain.
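The hydrophilicity analysis behind Fig. 5A can be reproduced with a simple sliding-window hydropathy scan. The sketch below uses the standard Kyte-Doolittle scale; the 19-residue window, the 1.6 cutoff, and the input sequence (the KinB sequence of Fig. 2 would be substituted) are conventional choices assumed for the illustration, not parameters stated in the paper.

# Kyte-Doolittle hydropathy values (standard scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy(seq: str, window: int = 19):
    # Mean hydropathy over each sliding window along the sequence.
    return [sum(KD[a] for a in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def candidate_tm_starts(seq: str, window: int = 19, cutoff: float = 1.6):
    # 1-based start positions of windows hydrophobic enough to span a membrane.
    return [i + 1 for i, h in enumerate(hydropathy(seq, window)) if h > cutoff]

Applied to the full KinB sequence, such a scan would be expected to flag the two underlined stretches near residues 13-39 and 170-190.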
Autophosphorylation of KinB-The localization of KinB to the membrane complicated the purification of native protein for studies of its potential histidine protein kinase activity. However, all of the conserved sequences for kinase activity were present in the cytoplasmic carboxyl terminus. Thus, we tested the possibility that a carboxyl-terminal fragment of KinB may be enzymatically active, as is the case for several other membrane-associated sensor kinases (33-36). DNA encoding a carboxyl-terminal fragment of KinB from Gly-198 to the end (Val-595) was cloned in frame into the His tag vector pET28.b to form pSM95 (Fig. 1). This plasmid expressed a His6-tagged fusion protein (HC-KinB), which was purified using a nickel sulfate affinity column. To remove the His6 sequence, the purified fusion protein was digested with thrombin, which recognizes a site between the His6 tag and the C-KinB sequence. However, an amino-terminal sequence analysis of C-KinB revealed that thrombin (which has arginine as its preferred site) also cleaved the KinB protein between residues Arg-243 and Gln-244 to generate a 39.4-kDa C-KinB polypeptide. Nevertheless, this C-KinB fragment still retained all the sequences predicted to function as a histidine protein kinase (see Fig. 3). To test this, C-KinB was incubated with 32P-labeled ATP, and the samples were then subjected to SDS-PAGE and autoradiography. C-KinB (25 pmol) incubated with [γ-32P]ATP (33 μM) showed progressive autolabeling over the 1-60-min period examined (Fig. 6A, lanes 1-6). Incubation with [γ-32P]ATP at 33 or 66 μM for 40 min showed similar labeling of C-KinB (compare lanes 5 and 7), suggesting that ATP was not a limiting factor in these reactions. Accordingly, incubation with 50 pmol of C-KinB for 40 min did show increased labeling (Fig. 6A, compare lanes 5 and 8). As a control, C-KinB (25 pmol) was incubated with [α-32P]ATP (15 μM) for 60 min (Fig. 6A, lane 9), and no labeling was observed; this ruled out the possibility of nonspecific binding of ATP by C-KinB. The autoradiogram showing autolabeling of C-KinB suggested that the level of protein phosphorylation (i.e. the balance of autophosphorylation and dephosphorylation) had not reached its maximum by 60 min. Thus, a quantitative time course of C-KinB autophosphorylation was performed using liquid scintillation counting (Fig. 6B). This showed that the level of phosphorylated C-KinB under these conditions did not reach a plateau until approximately 5 h of incubation. One possible reason for this overall slow reaction would be a high rate of C-KinB dephosphorylation. However, this appeared not to be the case, because the phosphoryl label on C-KinB was stable after incubation with a chase of cold ATP (333 μM) for 30 min (Fig. 6C).

FIG. 4. Membrane localization of KinB-LacZ in P. aeruginosa. A, illustration of the strategy used to produce KinB-LacZ and AlgB-LacZ fusion proteins by the recombinational integration of plasmids encoding kinB-lacZ (pSM82, Fig. 1) or algB-lacZ (pSM35, Fig. 1) into the chromosome of P. aeruginosa. The vector's bla gene, encoding carbenicillin resistance, was used for selection. B, Western blot analysis of KinB-LacZ and AlgB-LacZ fusion proteins. The proteins (30 μg) in whole cell extracts (W) and in fractions enriched for membrane proteins (M) or cytoplasmic proteins (C) were subjected to SDS-8% PAGE. Lanes 7 and 8 contained a whole cell extract of strain FRD1 (Pa) and β-galactosidase (LacZ), respectively. Proteins were transferred to a nitrocellulose membrane, and an immunostain was performed using an anti-β-galactosidase (Anti-LacZ) polyclonal antibody as the probe.
C-KinB Mutants Altered at Conserved Sequences Are Affected in Autophosphorylation-We tested whether autophosphorylation activity required sequences in KinB that are homologous to those of other sensor kinases. Critical residues in histidine protein kinases that were conserved in C-KinB (described above, see Fig. 3) were altered by site-directed mutagenesis of kinB. Mutant alleles of kinB were generated that expressed the following mutant HC-KinB proteins: H385K and H385Q, in which His-385 in the H box (i.e. the predicted site of phosphorylation) was changed to Lys and Gln, respectively; N504Q, in which Asn-504 in the N box was mutated to Gln; D532N and D532E, in which Asp-532 of the D/F box was changed to Asn or Glu, respectively; and G560A, in which Gly-560 in the G box was changed to Ala. Mutant derivatives of HC-KinB were purified in the same manner as wild-type HC-KinB and estimated to be >95% pure by SDS-PAGE. The His6 tags on these proteins were also removed by thrombin digestion. Equivalent amounts of wild-type and mutant C-KinB derivatives, after treatment with thrombin, were examined by SDS-PAGE for relative stability of the proteins (Fig. 7A). Only C-KinB D532E (Fig. 7A, lane 7) showed any evidence of degradation beyond removal of the His6-Arg-243 peptide (despite 27 other Arg residues, the preferred site of thrombin cleavage). When each protein (2.5 μM) was incubated with [γ-32P]ATP (33 μM), the wild-type C-KinB showed strong autophosphorylation activity (Fig. 7B, lane 1). However, labeling of the mutant proteins was undetectable, except for the C-KinB D532N derivative, in which a trace amount of phosphorylated protein was detected (Fig. 7B, lane 5). The C-KinB E257Q protein had a substitution at a nonconserved residue, and it showed autophosphorylation comparable with that of wild type (Fig. 7B, lane 8).

FIG. 6 legend (the beginning of panel A is truncated): ...lanes 1-6, respectively. Positions of protein size markers are shown on the left. To demonstrate that ATP was in excess and C-KinB was limiting, the concentrations of either [γ-32P]ATP or C-KinB in the reactions were doubled, and phosphorylation was carried out for 40 min (lanes 7 and 8, respectively). Incubation of C-KinB with 15 μM [α-32P]ATP for 60 min under the same conditions was performed to confirm that nonspecific ATP binding was not a factor (lane 9). B, time course of autophosphorylation of C-KinB. Samples containing C-KinB and [γ-32P]ATP (as described above) were incubated for 0-5 h and then spotted onto a phosphocellulose membrane, which was then washed to remove unincorporated label. The incorporation of 32P into C-KinB was determined from the radioactivity (counts/min (CPM)) retained on the membranes. The plot shown is based on the average of three independent experiments. C, to determine the stability of phosphorylated C-KinB, C-KinB (2.5 μM) was incubated with 33.3 μM [γ-32P]ATP at room temperature for 1 h, and 333 μM unlabeled ATP was added to each of the reaction mixtures. The reactions were terminated after 0, 2, 4, 8, 15, and 30 min, cleared of unincorporated label, and subjected to SDS-10% PAGE followed by autoradiography (lanes 1-6, respectively).
Phosphotransfer from C-KinB to AlgB-To determine whether AlgB-KinB may function as a two-component regulatory pair, the ability of phosphorylated C-KinB to donate a phosphoryl group to AlgB was examined. AlgB was overexpressed in E. coli and purified (>90%) using standard chromatographic procedures (Fig. 8A, lane 4). Purified AlgB alone was not autophosphorylated when it was incubated with [γ-32P]ATP, as determined by SDS-PAGE and autoradiography (Fig. 8B, lane 1). As shown above, purified C-KinB (1.3 μM) incubated with [γ-32P]ATP for 60 min showed autophosphorylation (Fig. 8B, lane 2). However, when AlgB (40 pmol) was incubated for 90 s with autophosphorylated C-KinB (K*), AlgB became radiolabeled, and complete dephosphorylation of C-KinB was also observed (Fig. 8B, lane 4). Other studies of response regulators (e.g. CheY) indicate that Mg2+ is required for phosphorylation (37). This also appears to be the case with AlgB, since no AlgB phosphorylation was observed when the protein was preincubated with EDTA to chelate divalent cations (Fig. 8B, lane 3). In other experiments, maximum phosphotransfer from 32P-C-KinB to AlgB was observed after only 20-40 s of incubation (data not shown). Taken together, the above results show that KinB in P. aeruginosa is a member of the sensor kinase superfamily with histidine kinase activity that can rapidly phosphorylate its cognate response regulator, AlgB.
DISCUSSION
The genes involved in alginate biosynthesis are under complex control by a cascade of regulators (6, 8). Two positive regulators of alginate production, AlgB and AlgR, affect transcriptional activation of the alginate biosynthetic operon at algD, and both have sequence similarity to the family of response regulators of two-component systems (9, 10). This suggested that the production of alginate by P. aeruginosa is influenced by environmental factors, some of which may be found in the unique environment of the CF lung (17). Prior to the recent description of FimS and its association with AlgR (19), no putative cognate sensor for AlgR had been recognized. However, FimS (also known as AlgZ) does not possess sequence similarity to typical histidine protein kinases (19, 20). The goal of this study was to identify KinB, a cognate sensor for AlgB, and to test for their potential interaction via phosphorylation. In that genes encoding histidine protein kinases are often closely linked to genes for their cognate response regulators (12), we examined the DNA immediately downstream of algB, and as a result kinB was discovered. KinB had a predicted molecular mass of 66 kDa and showed sequence similarity to many histidine protein kinases of two-component regulatory systems. KinB had all four conserved "boxes" characteristic of histidine protein kinases. Like many of them, KinB also had two hydrophobic domains at the amino terminus that are of sufficient length and hydrophobicity to span the inner membrane. These observations led to an analysis of a KinB-LacZ fusion protein in P. aeruginosa, which suggested that KinB was indeed a membrane protein. An analysis of KinB-PhoA fusions supported the predicted membrane topology of KinB, in which the region between the two hydrophobic domains lies in the periplasm. The COOH terminus of KinB, which contained amino acid residues conserved with other sensor kinases, was apparently localized to the cytoplasm. Under appropriate in vivo conditions, the amino-terminal domain may act as an environmental sensor of some unknown factor(s) and transduce that information to the cytoplasmic domain to affect its kinase activity. It is difficult to speculate at this time just what environmental signal(s) might activate KinB, as its periplasmic domain has no significant similarity with any other known protein.
Most sensor kinases studied are capable of undergoing autophosphorylation at a conserved histidine residue in the H domain of the protein (38). Purified C-KinB was shown in this study to undergo progressive autophosphorylation when incubated with [γ-32P]ATP. Interestingly, the level of autophosphorylated protein did not reach its maximum until about 5 h at room temperature in the presence of excess [γ-32P]ATP. This rate is quite slow when compared with the autophosphorylation of other sensor proteins under similar conditions. These sensors include derivatives of ArcB (35) and EnvZ (39) deleted of their amino-terminal transmembrane domains, which have been shown to reach maximum autophosphorylation within minutes. Since the phosphorylated form of C-KinB appeared quite stable, a high intrinsic phosphatase activity is not likely, and an explanation for the atypically slow autophosphorylation of C-KinB is not currently available. However, it is possible that the deletion of the amino terminus affected its autophosphorylation activity, even though C-KinB contained the entire kinase domain. The oligomeric state of many sensor kinases is important for their autophosphorylation activity (40-42). The periplasmic domain of some kinases facilitates dimerization when it is bound by environmental stimulatory ligands (43, 44). The rapid autophosphorylation seen in amino-truncated ArcB and EnvZ may be due to strong protein-protein interactions that remain between the monomers, which is suggested by the observed aggregation and precipitation of truncated ArcB and EnvZ with the membrane fraction when overexpressed in E. coli (35, 39). In contrast, when C-KinB was overexpressed, it remained soluble. It is currently not clear whether the native form of KinB forms a dimer or whether dimerization affects KinB autophosphorylation activity. Another explanation for the observed kinetics of C-KinB phosphorylation also relates to the soluble nature of C-KinB. When "tethered" to a membrane, as is the case for native KinB, the effective concentration of KinB may be higher than that observed with the soluble C-KinB used in these studies. In addition, the reaction conditions for the C-KinB autophosphorylation assay used here may not be optimal for this protein, although similar conditions were used in the phosphorylation of truncated ArcB and EnvZ (35, 39).
Since the sequence of KinB showed high homology with other sensor kinases, substitutions of the conserved residues were made to verify that KinB is a new member of this conserved superfamily of histidine protein kinases. When the predicted histidine phosphorylation site in KinB (His-385 in the H box) was changed to either a lysine or a glutamine, autophosphorylation of C-KinB was completely lost. Moreover, mutations affecting the other conserved boxes all had deleterious effects on the kinase activity, suggesting that KinB is a typical histidine protein kinase. Interestingly, while no phosphorylated protein was detected when Asp-532 in the D/F box was replaced with a glutamate, changing the same residue to an asparagine permitted some residual C-KinB autophosphorylation.
The ability of phospho-C-KinB to phosphorylate the purified response regulator AlgB was also demonstrated. When AlgB was incubated with phosphorylated C-KinB at a molar ratio of 3 to 1, the phosphoryl group was rapidly transferred to AlgB, with transfer complete within 40 s. This rate is similar to that observed between other sensor-regulator pairs (37, 45). Also, similar to the phosphorylation of other response regulators (37, 45), AlgB phosphorylation was inhibited by EDTA, suggesting the requirement of Mg2+ in the phosphorylation reaction. Magnesium has been shown to bind at an aspartate-rich acidic pocket within the active site of the response regulator phosphorylation domain. Binding of Mg2+ causes conformational changes in the response regulator, and this likely facilitates the phosphotransfer reaction between histidine protein kinases and response regulators (46-49). Previous studies with the alginate response regulator AlgR demonstrated that AlgR was capable of being phosphorylated by the well characterized histidine protein kinase CheA and by small phospho-donor molecules (50). Despite numerous attempts, AlgB could not be phosphorylated by CheA (data not shown). This suggests that phosphorylation of AlgB by C-KinB has a relatively high specificity. The possibility of AlgR phosphorylation by C-KinB, as well as the involvement of small phospho-donor molecules in AlgB phosphorylation, are currently being examined.
At least three other sensor kinase-regulator pairs have been reported in P. aeruginosa, but this is the first case in which in vitro phosphorylation of both the sensor and the regulator has been demonstrated in this organism. Besides AlgB-KinB, there are two other typical two-component regulatory systems: PilS-PilR, involved in the regulation of expression of type IV fimbriae (49), and PfeS-PfeR, which control the expression of the ferric enterobactin receptor, PfeA (51). The genes for the histidine protein kinase and the response regulator in each of these two systems are also next to each other (49, 51). The organization of pfeR-pfeS is strikingly similar to that of algB-kinB, in that the start codon for pfeS also overlaps the stop codon for pfeR (51). The three kinases, PilS, PfeS, and KinB, all have conserved residues characteristic of histidine protein kinases, but little homology beyond that. It appears likely that KinB responds to signals different from those of PilS and PfeS. Recently, another sensor-regulator pair, FimS-AlgR, has been suggested to belong to a new family of transmitter-receiver response regulators (19, 20). However, in that the predicted FimS (AlgZ) sequence lacks a conserved H box, it has been postulated that FimS may not undergo autophosphorylation, although it may still be able to transfer a phosphoryl group to AlgR (19). It will be of interest to determine to what extent the roles of the algB-kinB system and the fimS-algR system overlap in the control of virulence factors in this opportunistic pathogen.
"Biology",
"Chemistry"
] |
Network games with dynamic players: Stabilization and output convergence to Nash equilibrium
This paper addresses a class of network games played by dynamic agents using their outputs. Unlike most existing related works, the Nash equilibrium in this work is defined by functions of agent outputs instead of full agent states, which allows the agents to have more general and heterogeneous dynamics and to maintain some privacy of their local states. The network game of interest is formulated with agents modeled by uncertain linear systems subject to external disturbances. The cost function of each agent is a linear quadratic function depending on the outputs of itself and of its neighbors in the underlying graph. The main challenge stemming from this game formulation is that merely driving the agent outputs to the Nash equilibrium does not guarantee the stability of the agent dynamics. Using the local output and the outputs from the neighbors of each agent, we aim at designing game strategies that achieve output Nash equilibrium seeking and stabilization of the closed-loop dynamics. In particular, when each agent knows how the actions of its neighbors affect its cost function, a game strategy is developed for network games with digraph topology. When each agent is also allowed to exchange part of its compensator state with its neighbors, a distributed strategy can be designed for networks with connected undirected graphs or connected digraphs.
Introduction
Game theory has various applications to the control of multiagent systems, including smart grids, optical networks, and mobile sensor networks; see, for example, Wang et al. (2014); Stanković et al. (2012); Pavel (2006). In these applications, the agents try to minimize local cost functions that depend on the actions of their own and of other players. The aim of the game is often to seek a Nash equilibrium (NE), i.e., a point at which no agent can gain by unilaterally changing its strategy. As far as we know, all NE seeking problems in the literature use full agent states as decision variables, and the sole purpose of the game strategy is to drive all the components of each agent state to the NE. These game theoretic problems have undesirable characteristics, including the lack of privacy among agents and restrictions on the agent dynamics. First, as full agent states are used as decision variables, each agent has to know, or in the case of limited communication, observe the full states of all other agents. In this setting, it is impossible for the agents to converge to the NE while keeping some parts of their states unknown to the others. Second, in most existing works, the agents do not have independent dynamics, such as in Ye and Hu (2017), or they have simple homogeneous dynamics, such as in Romano and Pavel (2019a). The recent work of Romano and Pavel (2019b) develops distributed NE seeking strategies for a class of heterogeneous linear systems and defines the NE using partial local states.
However, the agent dynamics therein take the special form of multi-integrators, and at the defined NE, the part of the local state that does not explicitly appear in the NE must be zero. In summary, the existing NE seeking strategies are not applicable to engineering problems of multi-agent systems with general linear local dynamics. Motivated by these drawbacks in the existing works, we aim at solving an NE seeking problem such that (i) if part of the local agent state is not directly involved in decision making, it can remain private from other agents; and (ii) the agents can have more general and heterogeneous dynamics. However, it should be pointed out that this game formulation brings a major challenge: the outputs converging to the NE does not imply the stability of each agent. As we also assume that the agent states are not measurable, our goal is to design output feedback strategies that serve two purposes: stabilizing the local dynamics and driving the outputs to the NE.
One important feature of the NE seeking strategies in references such as Lin et al. (2014); Parise et al. (2015); Koshal et al. (2016); Salehisadaghiani and Pavel (2016); Ye and Hu (2017); Deng and Liang (2019); De Persis and Grammatico (2019); Romano and Pavel (2019a) is that the designed strategies are distributed with respect to the underlying communication graph. The distributed game strategies proposed in Salehisadaghiani and Pavel (2016); Romano and Pavel (2019a) exploit the underlying communication network topology so that each agent estimates the actions of others using information from its neighbors. Nonetheless, in general games, each agent needs information on the actions of all other agents to determine its own action. When the size of the network is large, the computational burden on each agent can be extremely heavy. This limitation has led to research on network games. In Parise et al. (2015), distributed iterative strategies are proposed for agents to converge to an NE in network games with quadratic costs and convex constraints. Grammatico (2018) investigates the convergence of different equilibrium seeking proximal dynamics for multiagent network games with convex cost functions, time-varying communication graphs, and coupling constraints. Our work focuses on NE seeking in a class of network games, since in the cooperative control of large networks it can be computationally difficult or unrealistic for each agent to consider the actions of all others. Specifically, the network games considered in this work are a class of quasi-aggregative games in which each agent minimizes a cost function depending on its own actions and the actions of its neighbors in the underlying network topology. If the network is connected by a complete graph, the network games of interest become general or aggregative games such as the ones studied in Koshal et al. (2016); Salehisadaghiani and Pavel (2016); Deng and Liang (2019); De Persis and Grammatico (2019); Romano and Pavel (2019a).
Another practical and important consideration in game theoretic engineering problems is the influence of external disturbances. In engineering applications, agents are often subject to external disturbances, for instance, the energy consumption demand in Wang et al. (2014) and the wind pushing the mobile robots in the formation control of Romano and Pavel (2019a). However, designing NE seeking strategies capable of disturbance rejection has not gained much attention, apart from a few works such as Stanković et al. (2012); Romano and Pavel (2019a). In this paper, the game is formulated with agents subject to deterministic disturbances generated by linear exosystems, which can be a combination of step functions and finitely many sinusoidal functions.
For general or aggregative game settings, some recent works have investigated distributed NE seeking problems without, or with simple, local agent dynamics. Ye and Hu (2017) consider the NE seeking problem for agents communicating through an undirected and connected graph. Based on a consensus protocol and the gradient play approach, distributed NE seeking strategies are proposed for games with quadratic and nonquadratic cost functions. In the recent work of Romano and Pavel (2019a), dynamic NE seeking strategies are proposed for single- and double-integrator networks subject to external disturbances modeled by linear exosystems. The cost function of each agent is a general convex cost function that depends on the actions of all the agents in the network. Under the assumption that the underlying communication graph is undirected and connected, dynamic strategies can be developed to estimate the actions of others and for the agents to converge to the unique Nash equilibrium. Deng and Liang (2019) investigate an aggregative game of Euler-Lagrange systems in the presence of uncertain parameters. The network topology is assumed to be undirected and connected, and the agents are free of external disturbances. As Euler-Lagrange systems are nonlinear second-order systems, the game strategies in Deng and Liang (2019) are based on double integrators.
The distributed robust NE seeking strategies proposed in this work are inspired by methods from linear output regulation. Compared with existing works on distributed NE seeking strategy design, this paper has the following contributions. First, we define the NE using the outputs of general linear systems as decision variables, which leads to strategies with a wider range of engineering applications. Second, the designed NE seeking strategies achieve not only output convergence to the NE but also stabilization of the agent dynamics despite model uncertainties and external disturbances. Last but not least, the proposed strategies can be applied to networks with communication digraphs, which is not the case for most existing game strategies, including the ones in Ye and Hu (2017); Romano and Pavel (2019a); Deng and Liang (2019).
The rest of the paper is arranged as follows. The network game formulation and some preliminaries are given in Section 2. The distributed NE seeking strategies are developed and analyzed in Section 3. In Section 4, simulation results on the control of sensor networks are presented. Finally, some concluding remarks are given in Section 5.
Notations. R denotes the set of real numbers. I_n denotes the n × n identity matrix. For vectors x_1, . . . , x_N, col(x_1, . . . , x_N) = [x_1^T, . . . , x_N^T]^T denotes their concatenation into a single column vector. Given matrices A_1, . . . , A_N, blockdiag(A_1, . . . , A_N) denotes the block diagonal matrix with A_i on the diagonal.
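For concreteness, the two stacking operations can be written out in NumPy; the arrays below are arbitrary examples.

import numpy as np
from scipy.linalg import block_diag

x1, x2 = np.array([1.0, 2.0]), np.array([3.0])
col_x = np.concatenate([x1, x2])      # col(x1, x2) = [1, 2, 3]^T

A1, A2 = np.eye(2), 2 * np.eye(3)
D = block_diag(A1, A2)                # blockdiag(A1, A2), a 5 x 5 block diagonal matrix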
Game formulation and preliminaries
In this section, the formulation of the NE seeking problem for a class of network games will be presented. Then, we give some preliminaries on linear output regulation that will be used in the subsequent sections. It is shown that any controller that solves the output regulation problem of the reformulated agent dynamics also solves the NE seeking problem.
Game formulation
Consider a game denoted by G(I, J_i, Ψ_i) played by N agents with index set I = {1, . . . , N}, where agent i has dynamics

ẋ_i = A_i(μ_i) x_i + B_i(μ_i) u_i + P_i(μ_i) w_i,   y_i = C_i(μ_i) x_i,   (1)

with x_i ∈ R^{n_i} the local state, u_i ∈ R^{m_i} the input and strategy vector, y_i ∈ Ψ_i ⊆ R^{p_i} the output and decision variable, and μ_i ∈ R^{n_i(n_i+m_i+q_i+p_i)} the uncertainty. The disturbance w_i ∈ R^{q_i} is generated by

ẇ_i = S_i w_i,   (2)

with arbitrary initial value w_i(0). The system matrices take the form A_i(μ_i) = A_i + ΔA_i, B_i(μ_i) = B_i + ΔB_i, P_i(μ_i) = P_i + ΔP_i, and C_i(μ_i) = C_i + ΔC_i, where A_i, B_i, P_i, C_i are the known nominal parts and ΔA_i, ΔB_i, ΔP_i, ΔC_i are the uncertain parts. The uncertainty for each agent i, i ∈ I, can be written as μ_i = col(vec(ΔA_i), vec(ΔB_i), vec(ΔP_i), vec(ΔC_i)). Define G_c = (I, E) as the underlying communication graph of the network, where I is the index set defined above and E ⊂ I × I is the edge set. There is an edge between nodes i and j if (i, j) ∈ E, i, j ∈ I. For an undirected graph, if (i, j) ∈ E then (j, i) ∈ E. Denote the neighbor set of agent i as N_i ⊂ I. An undirected graph is connected if there exists a path between every pair of nodes. A digraph is weakly connected if there exists an undirected path between every pair of nodes, and is strongly connected if there exists a directed path between every pair of nodes. Now we give the definition of an NE using the outputs of (1) as decision variables in the network game G(I, J_i, Ψ_i). In the game, each agent tries to minimize a local cost function denoted by J_i, i ∈ I.
Definition 1 (NE in network games). Given a network game G(I, J_i, Ψ_i), an output profile y* = col(y_1*, . . . , y_N*) is a Nash equilibrium (NE) if, for every i ∈ I, J_i(y_i*, y_{N_i}*) ≤ J_i(y_i, y_{N_i}*) for all y_i ∈ Ψ_i.

The local cost function considered in this work is

J_i(y) = y_i^T R_ii y_i + Σ_{j∈N_i} y_i^T R_ij y_j + Q_ii y_i + Σ_{j∈N_i} Q_ij y_j,   (3)

where N_i denotes the neighbor set of agent i. Assume that R_ij ≠ 0, Q_ij ≠ 0 for j ∈ N_i, and R_ij = 0, Q_ij = 0 otherwise. Then, the cost function can also be written as J_i(y_i, y_{N_i}). Note that since R_ii > 0 for each i ∈ I, Ψ_i = R^{p_i}, and Φ_i = R^{n_i}, the local cost function J_i is strictly convex and radially unbounded in y_i for all y_{N_i} ∈ Ψ_{N_i}. Then, according to Başar and Olsder (1999, Corollary 4.2), there exists an NE for the network game G(I, J_i, Ψ_i).
Denote the partial gradient of the cost function J_i with respect to y_i as ∇_{y_i} J_i(y) = (R_ii + R_ii^T) y_i + Σ_{j∈N_i} R_ij y_j + Q_ii^T. Then, the pseudogradient can be written as F(y) = R̄y + Q̄, where R̄ is the block matrix with diagonal blocks R_ii + R_ii^T and off-diagonal blocks R_ij, and Q̄ = col(Q_11^T, . . . , Q_NN^T).

Assumption 2.1. The matrix R̄ is positive definite.
Following Facchinei and Pang (2003, Theorem 2.3.3), under Assumption 2.1, the mapping F is strictly monotone, and the game has a unique NE y* satisfying F(y*) = R̄y* + Q̄ = 0.
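As a quick sanity check of this characterization, the sketch below generates a random positive-definite R̄ and recovers the unique NE with a single linear solve; all dimensions and numbers are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)
p, N = 1, 3                               # scalar decision variables, 3 agents
M = rng.standard_normal((N * p, N * p))
Rbar = M @ M.T + N * np.eye(N * p)        # positive definite by construction
Qbar = rng.standard_normal(N * p)

y_star = np.linalg.solve(Rbar, -Qbar)     # F(y*) = 0  <=>  y* = -Rbar^{-1} Qbar
assert np.allclose(Rbar @ y_star + Qbar, 0.0)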
Remark 2.2 (Monotonicity of F). Different assumptions on the monotonicity of mapping F have been used in existing works to guarantee the uniqueness of the NE. For example, for general cost functions, Koshal et al. (2016); Salehisadaghiani and Pavel (2016) assume the mapping to be strictly monotone, and Deng and Liang (2019); De Persis and Grammatico (2019) assume that the mapping is strongly monotone. Note that for general cost functions, the assumption of strict monotonicity is weaker than that of strong monotonicity. In this work, as the cost function of each agent is in the linear quadratic form (3), the strict monotonicity and the strong monotonicity of mapping F are equivalent.
Another assumption is made to exclude the trivial case where the disturbance w i for i ∈ I exponentially decays to 0.
Assumption 2.2. For each i ∈ I, S_i has no eigenvalues with negative real parts.

Now we give the formulation of the NE seeking problem for the network game G.
Problem 1 (Output NE seeking). For each agent i ∈ I, design a strategy u_i such that (i) the closed-loop dynamics of all agents are stable, and (ii) the outputs y := col(y_1, . . . , y_N) converge to the NE y* that satisfies R̄y* + Q̄ = 0.
Contrary to most existing works on NE seeking, the solution of Problem 1 must not only drive the decision variables to the NE, but also guarantee the stability of the closed-loop systems. This objective is similar to output regulation problems, where the aim of the regulator is to achieve reference tracking and/or disturbance rejection while guaranteeing that the closed-loop system is stable. In fact, we will draw inspiration from output regulation for the game strategy design in Section 3.
Preliminaries
First, we write the dynamics of all the agents in a stacked form. Denote m̄ = Σ_{i∈I} m_i, n̄ = Σ_{i∈I} n_i, p̄ = Σ_{i∈I} p_i, and q̄ = Σ_{i∈I} q_i. Use the pseudogradient as the regulated error, i.e., e = F(y) = R̄y + Q̄.
In order to write the regulated error e as a function of the output y and an exogenous signal, we construct a linear exosystem that consists of the disturbance generator (2) and a constant component. Specifically, for i ∈ I,

v̇_i = S̄_i v_i,   v_i = col(v_i1, v_i2),   S̄_i = blockdiag(S_i, 0).   (4)

Then, the dynamics of the ith agent can be written as

ẋ_i = A_i(μ_i) x_i + B_i(μ_i) u_i + P̄_i(μ_i) v_i,   y_i = C_i(μ_i) x_i,   (5)

where P̄_i(μ_i) = [P_i(μ_i) 0]. The local regulated error e_i is defined as the local partial gradient, described by

e_i = (R_ii + R_ii^T) y_i + Σ_{j∈N_i} R_ij y_j + Q_ii^T v_i2.   (6)

In (4), v_i1 is the same as w_i in (2) and v_i2 is the constant 1. Accordingly, we can use Q_ii^T v_i2 to replace Q_ii^T in the expression of the local partial gradient e_i. It should be pointed out that adding the constant component v_i2 is necessary and does not complicate the strategy design. If (2) already has a constant component, i.e., S_i has an eigenvalue at 0, the addition of v_i2 will not change the subsequent design of the internal model. On the other hand, if S_i has no eigenvalue at 0, it is necessary to take the constant v_i2 into consideration. We will show later that, for the special case where the agents are not subject to external disturbances, due to the constant vector Q_ii in the cost function, the controller needs to contain an integrator, which is the internal model for constant exogenous signals.
We denote (x̄_i, ū_i) for each i ∈ I as the steady state of (5), i.e., a solution of

x̄̇_i = A_i(μ_i) x̄_i + B_i(μ_i) ū_i + P̄_i(μ_i) v_i,   0 = (R_ii + R_ii^T) ȳ_i + Σ_{j∈N_i} R_ij ȳ_j + Q_ii^T v_i2,   (7)

with the corresponding steady-state output ȳ_i = C_i(μ_i) x̄_i and x̄ = col(x̄_1, . . . , x̄_N). Then, viewing e_i as the local regulated error, the distributed cooperative output regulation problem of (5) is to design a controller using e_i such that (x_i, u_i) converge to (x̄_i, ū_i) for all i ∈ I. In fact, the solution to the cooperative output regulation problem also solves the NE seeking Problem 1.
Proposition 1. Under Assumptions 2.1 and 2.2, if there exist (x̄_i, ū_i) such that (7) holds for all i ∈ I, the corresponding ȳ = col(ȳ_1, . . . , ȳ_N) is the NE of the network game G(I, J_i, Ψ_i).
Assumption 2.3. For each i ∈ I, (A_i, B_i) is stabilizable and (A_i, C_i) is detectable.
Remark 2.4 (Overall network stabilizability and detectability). As the matrices R_ii are positive definite for all i ∈ I, the detectability of the pair (A_i, C_i) is equivalent to the detectability of the pair (A_i, (R_ii + R_ii^T)C_i). Moreover, since the dynamics of the agents are decoupled, the stacked pair (Ā, B̄) is stabilizable, and (Ā, C̄) and (Ā, R̄C̄) are detectable.
Assumption 2.4. For each i ∈ I and every λ ∈ spec(S̄_i), rank [A_i − λI_{n_i}, B_i; C_i, 0] = n_i + p_i.

Remark 2.5 (Existence of steady state). Assumption 2.4 guarantees the existence of the steady state (x̄_i, ū_i) by Huang (2004, Theorem 1.9). The appendix presents some conditions guaranteed by Assumption 2.4 that will be used in subsequent sections.
To reject the disturbance generated by the exosystem (4) and handle the uncertainties in the agent dynamics, an internal model is constructed for each agent i, i ∈ I. The following is a general definition of an internal model for an exosystem of the form ẇ = Sw.
Definition 2 (Internal model; Huang (2004, Definition 1.25)). Given any square matrix S, a pair of matrices (M_1, M_2) incorporates a p-copy internal model of S if the pair admits the form

M_1 = V [T_1, T_2; 0, G_1] V^{-1},   M_2 = V [T_3; G_2],   (8)

where T_1, T_2, T_3 are arbitrary constant matrices of compatible dimensions, V is any nonsingular matrix with the same dimension as M_1, and G_1 = blockdiag(β_1, . . . , β_p), G_2 = blockdiag(σ_1, . . . , σ_p), where the β_i are square matrices and the σ_i are column vectors of appropriate dimensions, (β_i, σ_i) are controllable pairs, and the characteristic polynomial of each β_i equals the minimal polynomial of S, for all i = 1, . . . , p.
Remark 2.6. By Definition 2, as a special case of (M_1, M_2), the pair (G_1, G_2) itself incorporates a p-copy internal model of the matrix S.
Remark 2.7. ("p-copy" Internal model) Under Definition 2, the dimension of the internal model is the dimension of the output times the order of the minimal polynomial of S . The internal model design in Isidori et al. (2003) has the same dimension but does not use the term "p-copy".
Distributed game strategies
In this section, two distributed output feedback control strategies are developed for the NE seeking problem. We first consider the case where the agents are connected by a digraph without loops. Then, by letting the agents communicate more information with their neighbors, we relax the assumption on the communication graph.
Directed communication graph
Using the internal model approach, a distributed error feedback strategy is designed in the form of

η̇_i1 = A_i η_i1 + B_i u_i + L_i (e_i − (R_ii + R_ii^T) C_i η_i1),
η̇_i2 = G_i1 η_i2 + G_i2 e_i,
u_i = K_i1 η_i1 + K_i2 η_i2,   (9)

where η_i = col(η_i1, η_i2) ∈ R^{n_i + p_i s_i}, with s_i the degree of the minimal polynomial of S̄_i, K_i = [K_i1 K_i2] is the gain matrix, and the pair (G_i1, G_i2) incorporates a p_i-copy internal model of the matrix S̄_i. Writing (9) compactly as η̇_i = M_i1 η_i + M_i2 e_i, u_i = K_i η_i, it follows from Definition 2 that the pair (M_i1, M_i2) also incorporates a p_i-copy internal model of the matrix S̄_i.
Before presenting the main result, we make the following assumption on the graph.

Assumption 3.1. The communication graph G_c is a digraph containing no loops.

In most existing works on distributed NE seeking strategies for games, for instance Salehisadaghiani and Pavel (2016); Deng and Liang (2019); Romano and Pavel (2019a), the network topology is assumed to be undirected and connected. As far as we know, the strategies proposed in the aforementioned works are not applicable to games with directed communication graphs. On the other hand, the controller (9) cannot solve the NE seeking problem with undirected communication graphs. A distributed strategy that is able to handle both directed and undirected communication graphs will be presented in the next subsection, obtained by allowing the neighboring agents to exchange some additional information.
Remark 3.1 (State privacy in (9)). In controller (9), each agent i, i ∈ I, exchanges its output y_i with its neighbors. Note that its neighbors are not able to reconstruct the full state x_i of agent i using only this output. In the case where C_i ≠ I_{n_i} and p_i < n_i, at least part of the agent state can remain private from its neighbors. We are now ready to present the main result of this subsection.
Theorem 1 (Distributed strategy under communication digraphs). Under Assumptions 2.3, 2.4, and 3.1, the distributed strategy (9) is a solution to the NE seeking Problem 1.
Proof. Denote z_i = col(x_i, η_i) and z = col(z_1, . . . , z_N). The closed-loop system can be written as

ż = A_c(μ) z + P_c(μ) v,   e = C_c(μ) z + Q_c v,   (10)

where v = col(v_1, . . . , v_N) and S̄ = blockdiag(S̄_1, . . . , S̄_N). We use A_c, P_c, and C_c to denote the closed-loop system composed of the nominal dynamics and the controller (9), where A_c is a block matrix with diagonal blocks A_ci and off-diagonal blocks E_ij, C_c is a block matrix with blocks C_ij for i, j ∈ I, i ≠ j, P_c = P̄, Q_c = Q̄, and E_ij = 0, C_ij = 0 whenever j ∉ N_i.
According to Huang (2004, Theorem 1.31), under Assumptions 2.2 and 2.3, there exists a dynamic output feedback controller in the form of (9) such that the closed-loop system is stable and the error e converges to 0 asymptotically if and only if the matrix A_c is Hurwitz and the regulator equations have a unique solution Z for any μ in an open neighborhood of μ = 0. First, we examine the stability of the nominal matrix A_c. Under Assumption 3.1, we can label the agents such that i < j if (i, j) ∈ E for i, j ∈ I. Then, A_c becomes a block lower triangular matrix. For each i ∈ I, the diagonal block A_ci is similar to a block triangular matrix whose diagonal blocks are A_i − L_i(R_ii + R_ii^T)C_i and Ā_ci = [A_i, 0; G_i2(R_ii + R_ii^T)C_i, G_i1] + [B_i; 0][K_i1 K_i2]. Note that there exists L_i for i ∈ I such that A_i − L_i(R_ii + R_ii^T)C_i is Hurwitz, as the pair (A_i, (R_ii + R_ii^T)C_i) is detectable. Under Assumption 2.4 and by the definition of the internal model, we have that, for all λ ∈ spec(G_i1), rank [A_i − λI_{n_i}, B_i; (R_ii + R_ii^T)C_i, 0] = n_i + p_i. Then, according to Huang (2004, Lemma 1.26), under Assumptions 2.2 and 2.3, the pair ([A_i, 0; G_i2(R_ii + R_ii^T)C_i, G_i1], [B_i; 0]) is stabilizable, so there exists K_i = [K_i1 K_i2] such that Ā_ci is Hurwitz. Then, for i ∈ I, Ā_ci, and consequently A_ci, are Hurwitz, which shows that the diagonal blocks of A_c are all Hurwitz. As A_c is a block lower triangular matrix, it is also Hurwitz.
Next, we show that there exists a unique solution Z to the regulator equations (11). As A_c is Hurwitz, according to Huang (2004, Lemma 1.27), the equations X S̄ = ĀX + B̄K̄Ξ + P̄, together with the corresponding equation for Ξ, have a unique solution (X, Ξ) for any matrices P̄ and Q̄, where K̄ = blockdiag(K_1, . . . , K_N). Note that the matrix equations (13) can be put into the compact linear form (14), and the solvability of (14) implies the solvability of the regulator equations (11) for any μ in an open neighborhood of μ = 0. Therefore, by Huang (2004, Lemma 1.20), the distributed controller (9) solves the output regulation problem. Then, following Proposition 1, controller (9) also solves the NE seeking Problem 1.
For the case where the agents are free of disturbances or subject to constant external disturbances, the exosystem (4) satisfies spec(S̄_i) = {0}. Then, the controller (9) can be simplified. The distributed output feedback controller for the disturbance-free NE seeking problem can be derived directly from Theorem 1.
Corollary 1. Consider an NE seeking problem defined in Problem 1 with agent dynamics

ẋ_i = A_i(μ_i) x_i + B_i(μ_i) u_i,   y_i = C_i(μ_i) x_i,   (15)

and cost functions (3) for all i ∈ I. Under Assumptions 2.3, 2.4, and 3.1, there exist matrices L_i, K_i1, and K_i2 such that the NE seeking problem has a solution in the form of (9) with (G_i1, G_i2) = (0_{p_i×p_i}, I_{p_i}).

Remark 3.2 (Integrator in the strategy). When the agents are not affected by external disturbances, we can design the p_i-copy internal model (G_i1, G_i2) as (0_{p_i×p_i}, I_{p_i}) for each agent i, i ∈ I. Note that in this case it is still necessary to include an integrator in the controller. This is because the steady-state output y satisfying R̄y + Q̄ = 0 depends on the constant matrix Q̄. If the agents have dynamics (1) and the disturbance w_i is a constant vector, the NE seeking strategy can use the same design as shown in Corollary 1, as spec(S̄_i) = {0} still holds.
General communication graph
Assumption 3.1 can be relaxed if the distributed game strategy is designed as

ξ̇_i = A_i ξ_i + B_i u_i + L_i (e_i − ê_i),
ζ̇_i = G_i1 ζ_i + G_i2 e_i,
u_i = K_i1 ξ_i + K_i2 ζ_i,   (16)

where ê_i = (R_ii + R_ii^T) C_i ξ_i + Σ_{j∈N_i} R_ij C_j ξ_j, and the matrix L_i and the pair (G_i1, G_i2) have the same definitions as in (9). The difference between (16) and (9) is that in (16) the agents exchange C_i ξ_i with their neighbors. To rule out the case where the network contains isolated agents solving an optimization problem instead of playing games with neighbors, we have the following assumption on the communication graph.
Assumption 3.2. The communication graph among the agents is connected.
Under Assumption 3.2, the network topology can be a connected undirected graph, or a weakly or strongly connected digraph.
Theorem 2 (Distributed strategy under connected communication graphs). Under Assumptions 2.3, 2.4, and 3.2, distributed strategy (16) is a solution to the NE seeking Problem 1.
Proof. Denoting ξ = col(ξ_1, . . . , ξ_N), ζ = col(ζ_1, . . . , ζ_N), and z = col(x, ξ, ζ) gives the system matrix A_c of the nominal closed-loop system. Note that A_c here has the same form as A_ci in the proof of Theorem 1. Therefore, under the assumption that R̄ is positive definite, it can be proved in the same fashion that A_c is Hurwitz and that the regulator equations have a unique solution. Then, applying Huang (2004, Theorem 1.31), we can prove that there exists a dynamic output feedback controller in the form of (16) that stabilizes the closed-loop system and drives the error e to zero asymptotically. Hence, following Proposition 1, (16) also solves the NE seeking problem.
By allowing the agents to exchange the additional information C_i ξ_i with their neighbors, we can relax the graph assumption of Theorem 1. In (16), the ξ_i-subsystem can be seen as an observer for the state x_i, and the ζ_i-subsystem is the internal model for the exosystem (4). As the agents exchange C_i ξ_i instead of the full state estimate ξ_i, similar to the arguments in Remark 3.1, the agents maintain some privacy.
Similar to the previous subsection, we can derive from Theorem 2 a corollary for the disturbance-free NE seeking problem under a general communication graph.
Corollary 2. Consider an NE seeking problem with agent dynamics (15) and cost functions (3) for all i ∈ I. Under Assumptions 2.3 and 2.4, there exist matrices L_i, K_i1, and K_i2 such that the NE seeking problem has a solution in the form of (16) with (G_i1, G_i2) = (0_{p_i×p_i}, I_{p_i}).
Simulation results
In this section, the proposed NE seeking strategies are applied to the connectivity control of sensor networks studied in Stanković et al. (2012). The network is composed of mobile robot agents to be positioned at optimal sensing points while keeping good connections with selected neighboring agents. In this example, we consider mobile robots modeled by

ẋ_i1 = x_i2,   ẋ_i2 = −c_i x_i2 + u_i + w_i,   y_i = x_i1,

where x_i1 and x_i2 denote the position and velocity of each agent, respectively, and c_i > 0 is the friction parameter. The decision variable is the position of each agent, i.e., y_i = x_i1. The cost function is defined as

J_i(y_i, y_{N_i}) = ||y_i − r_i||^2 + Σ_{j∈N_i} ||y_i − y_j||^2

for each i ∈ I, where r_i is the objective position of each agent. Then, by converging to the NE, the agents compromise between the individual objective of moving to position r_i and the collective objective of maintaining connectivity with their neighbors.
In the simulation, the sensor network is composed of 5 mobile robots subject to disturbances generated by the exosystem

ẇ_i = [0, π/10; −π/10, 0] w_i,

which produces sinusoidal signals of frequency π/10. The friction parameter is set as c_i = 0.2. The initial positions of the agents are (0, 0), (1, 1), (1, −1), (2, 1), (2, −1), respectively. First, we consider a network connected by the digraph illustrated in Figure 1(a). In Figure 2, the initial positions of the agents are denoted by circles and the NE is denoted by a collection of crosses. Figure 3 illustrates the regulated errors e_i of the agents along the x and y axes.
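For reference, the NE of this example can be computed directly from the first-order conditions of the quadratic cost. The sketch below assumes the cost form given above and a placeholder edge set (the actual digraph is the one drawn in Figure 1(a), which is not reproduced here); under these assumptions the NE solves a strictly diagonally dominant, hence nonsingular, linear system.

import numpy as np

# Assumed neighbor sets and objective positions (placeholders, not the
# values behind Figures 1-3).
neighbors = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
r = np.array([[0.0, 0.0], [1.0, 2.0], [1.0, -2.0], [3.0, 1.0], [3.0, -1.0]])
N = len(r)

A = np.zeros((N, N))
for i, Ni in neighbors.items():
    A[i, i] = 1.0 + len(Ni)      # grad_i J_i = 0 gives
    for j in Ni:                 # (1 + |N_i|) y_i - sum_{j in N_i} y_j = r_i
        A[i, j] = -1.0

y_star = np.linalg.solve(A, r)   # one solve, applied column-wise to x and y coordinates
print(y_star)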
The second case is when the underlying graph of the agents is undirected, as illustrated in Figure 1(b).
Conclusion
Motivated by engineering applications, this paper considers output NE seeking problems in a class of network games with uncertain linear agent dynamics subject to external disturbances. The main challenge is to design a strategy capable of both driving the agent outputs to the NE and stabilizing the closed-loop dynamics. Other difficulties stem from the uncertainties in the dynamics, the external disturbances, and the relaxed assumptions on the communication graphs. By overcoming these difficulties, the proposed game strategies have a wider range of applications to engineering multi-agent problems. Future work may consider similar output network games with general cost functions and/or nonlinear agent dynamics.
Appendix
Some insights regarding Assumption 2.4. Under Assumption 2.4, it can be proved that, for any nonsingular matrix D_i ∈ R^{p_i×p_i} and any λ ∈ spec(S_i) ∪ {0}, rank [A_i − λI_{n_i}, B_i; D_i C_i, 0] = n_i + p_i. To prove this, we denote F_i = blockdiag(I_{n_i}, D_i) and H_i = [A_i − λI_{n_i}, B_i; C_i, 0]. Then, F_i ∈ R^{(n_i+p_i)×(n_i+p_i)} is nonsingular, rank F_i = n_i + p_i, H_i ∈ R^{(n_i+p_i)×(n_i+p_i)}, and rank(F_i H_i) = rank H_i = n_i + p_i by Assumption 2.4, which proves the claim since F_i H_i = [A_i − λI_{n_i}, B_i; D_i C_i, 0].
"Computer Science",
"Mathematics"
] |
Dorsal Hand Vein Analysis For Security Systems
Biometric systems, such as those based on fingerprints, iris patterns, or face recognition, are widely used for personal verification and identification. Systems based on vein pattern analysis are the most recent approach in this category. They use the large network of blood vessels underneath the skin as the feature for identification. This approach provides contactless image acquisition, and the vein pattern details cannot be forged easily, making it better than the other techniques. In the proposed system, translational effects are minimized by taking the region of interest (ROI) at the center of the hand, which is done by detecting the centroid. The key features used for matching are the line segments extracted using the Hough Transform. Finally, the authentication of users is decided on the basis of the matching algorithm. Here, the Modified Hausdorff Distance is used for matching, which gives a False Acceptance Rate (FAR) of 0.2% and a False Rejection Rate (FRR) of 0.1%. The analysis is performed on images acquired from the CIR biometrics database.
only [4]. In 2009, Ajay Kumar and K. Venkata Prathyusha proposed the use of knuckle tips as the key feature for identification; a disadvantage is that displacement of the hand may sometimes leave the knuckle tips uncovered [5]. In 2014, Ricardo Janes and Augusto Ferreira Brandão used the curvelet transform for feature detection, which introduced some losses in the image [6]. Later, in 2015, M. Rajalakshmi and R. Rengaraj used Delaunay's principle to find vein endings and bifurcation points as the key features. This principle is difficult to implement when only a small number of points is available, and it fails when the extracted veins do not yield a large number of points [7]. The proposed system, on the other hand, uses line segments for authentication, which gives more accurate results than the previous work [8].
II. PROPOSED SYSTEM

This paper presents an approach to authenticate individuals based on their dorsal hand vein pattern. The approach uses dorsal hand vein images acquired from a low-cost, contactless Near Infrared (NIR) camera. The architecture of the system is presented below (Fig 1).
A. Image acquisition
Image acquisition can be done in two ways: Far Infrared (FIR) imaging or Near Infrared (NIR) imaging. FIR imaging, which uses thermal radiation at wavelengths of roughly 8-14 μm, works on the principle that the veins have a higher temperature than the surrounding tissue, so a thermal imaging camera can be used to capture the image. This system has the disadvantage that, when used in the open, it can introduce noise into the image and thus requires more preprocessing. The NIR image acquisition technique has a number of advantages over the other techniques. NIR light, with wavelengths of 700 nm to 900 nm, can penetrate the skin up to a depth of about 3 mm. The veins carry deoxygenated blood, which strongly absorbs the NIR light. Due to this, the veins appear darker than the rest of the region in an image taken from an NIR camera. This makes the veins clearly visible as a dark foreground against the lighter skin background. Here, NIR imaging is used. Since such a setup is costly and a general webcam does not give images of sufficient quality for analysis, the images have been taken from the free CIR biometrics database, which consists of 50 images of different users. Fig 2 shows one of the database images.
B. Preprocessing and extraction of Region Of Interest (ROI)
The captured image is converted to grayscale, since gray images can be manipulated easily at the later stages. Histogram equalization is then performed to improve the contrast of the image, which helps bring out the details more accurately and clearly. The images acquired from the NIR camera may not necessarily be noise-free, and extracting a clear vein pattern requires noise removal; a median filter is employed for this purpose. Next, thresholding is performed to separate the vein pattern from the background. Here the global thresholding method of Otsu is used: it computes a threshold value that transforms the grey image into a binary image, replacing all values above the threshold with 1s and setting all other values to 0s. Fig 4 shows the result of thresholding, and Fig 5 shows the binarized veins of the selected region. After binarization, the veins may appear thick, making it difficult to extract the required features. To obtain clear features for matching, skeletonization is applied. This extracts the skeleton of the vein pattern while preserving its shape, producing a single-pixel-wide skeleton that is easy to analyze, as shown in fig 6.
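The chain above can be prototyped in a few lines. The following sketch uses SciPy and scikit-image, which are our choices since the paper does not name an implementation; the filter size and Hough parameters are illustrative placeholders, and the probabilistic Hough step anticipates the line-segment features used for matching below.

```python
# Minimal sketch of the preprocessing chain, assuming SciPy/scikit-image;
# all parameter values are illustrative, not the authors' settings.
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize
from skimage.transform import probabilistic_hough_line

def extract_vein_segments(gray):
    """gray: 2-D float array in [0, 1] holding the ROI of an NIR image."""
    eq = exposure.equalize_hist(gray)            # improve contrast
    den = median_filter(eq, size=3)              # suppress speckle noise
    veins = den < threshold_otsu(den)            # veins are the dark foreground
    skel = skeletonize(veins)                    # one-pixel-wide vein skeleton
    # Line segments ((x0, y0), (x1, y1)) used as matching features
    return probabilistic_hough_line(skel, threshold=10,
                                    line_length=15, line_gap=3)
```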
D. Matching algorithm
In order to verify the performance of the system, the generated line segments are matched using the Modified Hausdorff Distance (MHD) algorithm, a measure of the similarity of two point sets. MHD computes the forward and backward average distances between the two point sets and returns the maximum of the two. Here the threshold for the MHD is set to 10: if the distance between the two point sets is less than or equal to 10, the user is taken to be authorized. This is implemented using a GUI.
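A possible NumPy implementation of the matching step is sketched below; the function names are ours, the decision threshold of 10 follows the text, and the feature points are assumed to be the segment endpoints produced by the Hough stage.

```python
import numpy as np

def modified_hausdorff(a, b):
    """a: (N, 2), b: (M, 2) arrays of feature points (e.g. segment endpoints)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    forward = d.min(axis=1).mean()    # average nearest distance, a -> b
    backward = d.min(axis=0).mean()   # average nearest distance, b -> a
    return max(forward, backward)

def is_authorized(enrolled_pts, probe_pts, threshold=10.0):
    return modified_hausdorff(enrolled_pts, probe_pts) <= threshold
```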
III. CONCLUSION
The Dorsal Hand Vein system proposed here has the advantage that the region of interest is selected around the centroid, which minimizes translational effects and makes the system more stable. Future work involves using a larger database and considering factors affected by age and various diseases. Minimizing rotational effects can also be considered to improve the performance of the system. | 1,472.8 | 2017-08-31T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Reconstructing partonic kinematics at colliders with Machine Learning
In the context of high-energy physics, a reliable description of the parton-level kinematics plays a crucial role in understanding the internal structure of hadrons and improving the precision of the calculations. Here, we study the production of one hadron and a direct photon, including up to Next-to-Leading Order Quantum Chromodynamics and Leading-Order Quantum Electrodynamics corrections. Using a code based on Monte-Carlo integration, we simulate the collisions and analyze the events to determine the correlations among measurable and partonic quantities. Then, we use these results to feed three different Machine Learning algorithms that allow us to find the momentum fractions of the partons involved in the process, in terms of suitable combinations of the final-state momenta. Our results are compatible with previous findings and suggest that Machine Learning is a powerful tool for modelling high-energy collisions at the partonic level with high precision.
Introduction
Thanks to recent technological advances and increased computational power, Machine Learning (ML) has taken our everyday life by storm. Applications of ML cover fields as diverse as image and speech recognition, automatic language translation, product recommendation, stock market prediction and medical diagnosis, to mention some examples. High-energy physics has not remained indifferent to the opportunities offered by these techniques. In recent years several applications have been developed, particularly in regard to data analysis. Novel jet clustering algorithms that use improved classification to identify structures [1], reconstruction of the Monte-Carlo (MC) parton shower variables [2], and reconstruction of the kinematics [3] are just some of the explored uses. In particular, the high-luminosity upgrade of the Large Hadron Collider (LHC) and the upcoming Electron-Ion Collider (EIC) are feeding the interest of the community in ML 1. From a theoretical perspective there has been progress in the calculation of higher-order scattering amplitudes assisted by ML algorithms [4] and, in phenomenology, the NNPDF collaboration has pioneered the determination of the partonic structure of hadrons [5][6][7][8][9][10][11][12].
The success of the perturbative expansion of Quantum Chromodynamics (QCD) in describing processes involving hadrons lies in the factorisation of the physical observables into hard (perturbative, process-dependent) and soft (non-perturbative, universal) terms [13]. The former describe the interaction between elementary particles, while the latter encode all the information concerning non-perturbative physics, i.e., the description of the partons inside the hadrons before the interaction and their posterior hadronisation into detected particles. For the latter, only the scale evolution can be computed perturbatively once they are known at some other scale; they must therefore be obtained from data through global fits 2.
The simplest description of a hadron is that of a collection of partons moving in the same direction. The probability of finding a particular parton a in a hadron H carrying a fraction x of its momentum is given by the parton distribution function (PDF) f H/a (x, µ), when the hadron is explored at scale µ. After the hard interaction all outgoing coloured particles will hadronise; the probability of a parton a to fragment into a hadron H with a fraction z of its original momentum is described by the fragmentation function (FF) D a/H (z, µ). This collinear picture is the best explored and in this framework several sets of PDFs and FFs have been extracted using standard regression techniques (e.g. [15][16][17][18][19]), MC sampling (e.g. [20,21]) and MC sampling with neural networks (e.g. [5]).
In order to perform a meaningful calculation, the hard cross-section must be convoluted with the PDFs and/or FFs over the corresponding momentum fractions of the partons. In the inclusive deep inelastic scattering (DIS) process, where a lepton and a parton inside a hadron interact by exchanging momentum Q^2 ≥ 1 GeV^2, measuring the scattered lepton (and/or the final hadrons) provides the full kinematics of the event. Unfortunately, in proton-proton (p + p) collisions the situation is not so simple. One has to estimate the momenta of the initial partons (which enter in the evaluation of the PDFs) using the measured momenta and scattering angles of the final-state particles. Depending on the process and the characteristics of the detectors, this can become a complicated task. Despite its inherent complexity, it is of the utmost importance in some situations. For example, in the case of asymmetric proton-nucleus (p + A) collisions, particles created in the backward (nucleus-going) direction are linked to initial partons in the nucleus with low x, and those in the forward (proton-going) direction are related to partons in the nucleus with large x. Depending on its exact value, one could have an enhancement or a suppression of the nuclear PDF w.r.t. the free-proton one. Knowing the region of the detector associated with the kinematics of interest for a given process is also relevant for the efficient design and construction of the detectors [22]. The proper mapping of the measured kinematics onto the partonic level is crucial for a correct evaluation of the cross-sections and interpretation of the perturbative calculations. This can be done analytically at leading order (LO) for processes involving few particles, but as one considers higher orders the emission of real particles makes it hard to fully determine the kinematics, and normally phenomenological approximations are used.
In the present work, we aim to use ML to determine the relation between the measurable four-momenta of the final particles and the parton-level kinematics. In particular, we focus on p + p collisions with one photon plus one hadron in the final state, computed using QCD and Quantum Electrodynamics (QED) corrections. This process has already been identified as an interesting observable at the Relativistic Heavy-Ion Collider (RHIC) [23]. Our goal is to obtain the functions that, depending on the four-momenta of the photon and hadron, give x i (the fraction of momentum of the proton i carried by the parton coming from it, i = 1, 2) and z, the fraction of energy of the parton coming from the hard interaction that is taken by the hadron (in our analysis a pion).
This article is organised as follows. In Sec. 2 we describe the framework used to implement the MC simulation of hadron-photon production, with special emphasis on the isolation prescription (Sec. 2.1). Relevant phenomenological aspects of the process are discussed in Sec. 3. The distributions w.r.t. different variables are presented in Sec. 3.1, with the purpose of identifying the most probable configurations. We also explore the correlations between different measurable variables and the partonic momentum fractions in Sec. 3.2. In Sec. 4, we detail the implementation of reconstruction algorithms based on ML to approximate the partonic momentum fractions using only measurable quantities. Finally, we discuss the results and comment on potential future applications of our methodology in Sec. 5.
Computational setup
From the theoretical point of view, the calculation relies on the factorization theorem to separate the low-energy hadron dynamics (i.e. the non-perturbative component embodied in the PDFs and FFs) from the perturbative interactions of the fundamental particles. This approach is valid in the high-energy regime, under the assumption that the typical energy scale of the process is much larger than $\Lambda_{\rm QCD} \approx 200$ MeV. The process under consideration is

$$H_1 + H_2 \to h + \gamma + X \,, \qquad (2.1)$$

and the differential cross-section is given by

$$d\sigma = \sum_{\{a_i\}} \int dx_1\, dx_2\, dz\; f_{H_1/a_1}(x_1,\mu_I)\, f_{H_2/a_2}(x_2,\mu_I)\, D_{a_3/h}(z,\mu_F)\, d\sigma_{a_1 a_2 \to a_3 a_4}(x_1,x_2,z;\mu_I,\mu_F,\mu_R)\,, \qquad (2.2)$$

where {a_i} denote the possible flavours of the partons entering the fundamental high-energy collision. f_{H_i/a_j}(x, µ_I) is the PDF of the parton at the initial-state factorization scale µ_I, and D_{a_j/h}(z, µ_F) is the FF of the parton at the final-state factorization scale µ_F. The partonic cross-section, dσ, depends on the kinematics of the partons as well as on the factorization and renormalization (µ_R) scales, and can be computed using perturbation theory. It is worth appreciating that we consider all the partons to be massless.
In Eq. (2.2) we consider the photon as a parton, i.e. a_i ∈ {q, g, γ}. Namely, we rely on the extended parton model to include mixed QCD-QED corrections in a consistent way [24][25][26][27][28]. However, we will assume that the fragmentation of a photon into any hadron is highly suppressed w.r.t. the same process initiated by a QCD parton. This implies that we neglect D_{γ/h} and that a_3 is always a QCD parton (quark or gluon). Also, since we are looking for a photon in the final state, the final-state leg a_4 can be split into the case a_4 = γ and the case in which a_4 is a QCD parton whose fragmentation produces the observed photon. By rewriting Eq. (2.2) in this way, it is possible to separate two different mechanisms originating photons in the final state 3. The first term describes the direct production of an observed photon in the partonic collision; in the second term the observed resolved photon is generated from a non-perturbative process initiated by the parton a_4. It is worth appreciating that these contributions are not individually distinguishable; however, the latter can be suppressed by applying adequate prescriptions. Realising that the resolved component appears in the context of hadronisation, the photon being produced together with a bunch of hadrons, one can exploit this signature to enhance the direct photon: this is the motivation for introducing isolation prescriptions. By selecting mainly those events that contain photons isolated from hadronic energy, the total cross-section can be approximated by its direct component, Eq. (2.5), i.e. neglecting the resolved component and summing over all QCD-QED partons. The partonic cross-section dσ_{a_1 a_2 → a_3 γ} incorporates the isolation prescription and is described in greater detail in Sec. 2.1.
We can now move to the discussion of how to include the QED corrections. The next-to-leading order (NLO) pure QCD corrections for this process were computed in Refs. [23,29]. Since in this case we are dealing with mixed QCD-QED corrections, we have to consider the two couplings involved in the perturbative expansion. From the computational point of view, we can profit from Abelianization techniques to directly obtain QED contributions from the QCD ones [26,27,[30][31][32]. Given that the energy scale of the process is roughly O(10 GeV), we have α_S ≈ 0.12 and α ≈ 1/129. This means α ≈ α_S^2, indicating that the LO QED corrections have the same weight as the NLO QCD ones. Therefore, the dominant contribution is given by the partonic channels qq̄ → gγ and qg → qγ at O(α_S α),
with S_2 the measure function containing the definition of the kinematical selection cuts for the 2 → 2 sub-processes. We then have to include the O(α_S^2 α) and O(α_S α^2) contributions, associated to the partonic channels listed in Eq. (2.7) and to qγ → qγ and qq̄ → γγ (2.8), respectively. In this way, the corrections to the partonic cross-section, dσ^{ISO,(1)}, are given following Ref. [33]: here ŝ is the partonic center-of-mass energy, r denotes the extra parton associated to the real-radiation correction, and |M^(0)|^2 and |M^(1)|^2 are the squared matrix-elements for the tree-level and one-loop corrections, respectively. In these expressions, S_3 represents the measure function that implements the experimental cuts and the isolation prescription for the 2 → 3 sub-processes.
Since we are dealing with higher-order corrections, singularities will appear in the calculation. The LO QED contribution is given by a (finite) Born-level process. However, the NLO QCD corrections involve both ultraviolet (UV) and infrared (IR) singularities that must be regularized and cancelled to get a physical result. The regularization was done using Dimensional Regularization (DREG) [34][35][36][37]. The virtual corrections were computed starting from the one-loop QCD amplitude for the process 0 → qq̄gγ, removing the UV poles through renormalization in the MS scheme. In order to cancel the IR singularities, we relied on the subtraction formalism [38][39][40][41][42], splitting the real phase-space into regions containing only one kind of IR singularity. When combining the real and the virtual corrections, some of the IR divergences associated to final-state radiation (FSR) cancel by virtue of the KLN theorem [43,44]. But to achieve a full cancellation, counter-terms were added to remove the remaining initial-state and final-state contributions absorbed into the PDFs and FFs, respectively. In this way, the master formula for the partonic cross-section at NLO QCD + LO QED accuracy symbolically combines the Born term, the real and virtual corrections, and the counter-terms, where dσ^{ISO,cnt,(I)}_{a_1 a_2 → a_3 γ} and dσ^{ISO,cnt,(F)}_{a_1 a_2 → a_3 γ} are the initial- and final-state IR counter-terms, respectively, and C^{UV}_{a_1 a_2 → a_3 γ} is the renormalization counter-term for the partonic process a_1 a_2 → a_3 γ in the MS scheme 4.
Isolation prescription and other assumptions
In order to suppress events with photons originated from the decay of hadrons, it is necessary to implement an isolation prescription. The idea behind most of the strategies available in the literature consists in quantifying the amount of hadronic energy surrounding a wellidentified photon, and rejecting events with more hadronic energy than a certain threshold. Whilst most of the prescriptions work nicely at LO, not all of them are infrared safe. For instance, it is known that choosing a fixed cone eliminates events that play a crucial role in the cancellation of IR singularities. Thus, special care is needed in the implementation of these methods 5 .
In this work, we rely on the smooth cone prescription introduced in Ref. [49]. Its main advantage is that it suppresses the resolved component without preventing the emission of soft/collinear QCD radiation, which makes it IR-safe and fully suitable for higher-order calculations. In the first place, we fix a reference point in the rapidity-azimuthal plane, (η_0, φ_0), and define the distance

$$r_j = \sqrt{(\eta_j - \eta_0)^2 + (\phi_j - \phi_0)^2}\,, \qquad (2.11)$$

with (η_j, φ_j) the angular coordinates of the parton j. Once we identify a photon in the detector, we trace a cone of radius r_0 around it and look for QCD partons inside. If no QCD radiation lies inside the cone, the photon is isolated. If not, we identify the QCD partons inside the cone, {a_j}, and measure their distance to the photon following Eq. (2.11). Then, for a fixed r ≤ r_0, we calculate the sum of the hadronic transverse energy according to

$$E_T(r) = \sum_{a_j\,:\,r_j \le r} E_T^{a_j}\,. \qquad (2.12)$$

We want to restrict E_T by imposing an upper bound, thus limiting the amount of hadronic energy surrounding the photon. In the fixed cone prescription, this limit is a constant. However, for the smooth prescription, we introduce an arbitrary smooth function ξ(r) satisfying ξ(r) → 0 for r → 0, and require E_T(r) < ξ(r) for every r ≤ r_0. Only if this condition is fulfilled is the photon isolated; otherwise, the event is rejected.
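As a concrete illustration, a minimal implementation of this test might look as follows; the event format is hypothetical, and the profile ξ(r) shown is the common (1 − cos r) choice, which is only one of the admissible functions vanishing smoothly at r → 0.

```python
import numpy as np

def r_dist(eta1, phi1, eta2, phi2):
    """Distance in the rapidity-azimuth plane, Eq. (2.11)."""
    dphi = np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def is_isolated(photon, partons, r0=0.4):
    """photon/partons: dicts with 'eta', 'phi', 'et' (transverse energy)."""
    # Illustrative smooth profile; any xi(r) -> 0 for r -> 0 is admissible.
    xi = lambda r: photon["et"] * (1.0 - np.cos(r)) / (1.0 - np.cos(r0))
    dists = sorted((r_dist(p["eta"], p["phi"], photon["eta"], photon["phi"]),
                    p["et"]) for p in partons)
    e_sum = 0.0
    for r, et in dists:
        if r > r0:
            break                  # partons outside the cone are irrelevant
        e_sum += et                # E_T(r) of Eq. (2.12), evaluated at r = r_j
        if e_sum >= xi(r):         # isolation condition E_T(r) < xi(r) violated
            return False
    return True
```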
The experimental implementation of this criterion requires a very high angular resolution, something that is usually not achievable in practice. This is one of the reasons why most of the current experiments still rely (mainly) on the fixed cone prescription. Fortunately, the difference between the two approaches can be neglected for several relevant observables [46,47]. In any case, technological improvements in detector science will certainly reduce the experimental limitations in the near future.
Finally, let us mention one further detail about the implementation. We will neglect the partonic channel qq̄ → γγ in Eq. (2.5), which would imply the introduction of the fragmentation function D_{γ/h}. From the point of view of perturbation theory, this fragmentation can be interpreted as a collinear electromagnetic splitting γ → a + X, with a a QCD parton that undergoes hadronization to generate the observed hadron h. Performing a naive counting, this contribution is O(α^3) and turns out to be sub-leading w.r.t. the NLO QCD + LO QED terms studied in this work 6.
Phenomenological results
Using the formalism explained in the previous Section, we calculated the unpolarized cross-section via a code that uses adaptive MC integration. In this program, the different contributions to the 2 → 2 and 2 → 3 processes are computed independently, and kinematic cuts can be imposed. In particular, we reproduced the experimental cuts corresponding to the PHENIX detector, listed in Eq. (3.1), with η the rapidity of the particles measured in the hadronic center-of-mass frame. On top of that, we require |φ_h − φ_γ| > 2 to retain those events with the photon and hadron produced almost back-to-back. We perform the simulations at a centre-of-mass (c.m.) energy √S_CM = 200 GeV for RHIC and at √S_CM = 13 TeV for LHC Run II, keeping in this case the same cuts described in Eq. (3.1). Since the pion is the lightest hadron and is produced most copiously, we restrict our attention to the case h = π^+. Additionally, we considered the scenario of the Tevatron at √S_CM = 1.96 TeV, because it involves proton-antiproton (p + p̄) collisions. In principle, this process might exhibit a different dependence on the PDFs and FFs, compared to p + p collisions.
Regarding the non-perturbative ingredients of the calculation, we used the LHAPDF package [50,51] to have a unified framework for the PDF implementation. We relied on the NNPDF4.0NLO [12] and NNPDF3.1luxQEDNLO [52][53][54][55] parton distributions for the pure QCD and mixed QCD-QED calculations, respectively. In both cases, we use the set 0, which corresponds to an average over the different replicas. For the fragmentation functions, we used the DSS2014 set at NLO accuracy [18,56]. Also, we evolve the QCD and QED couplings using the one-loop RGE with the initial conditions α S (m Z ) = 0.118 and α(m Z ) = 1/128.
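For orientation, reading the central PDF member through the LHAPDF Python bindings could look as follows; the set name is the public LHAPDF identifier we believe corresponds to NNPDF4.0 NLO and should be checked against the LHAPDF index before use.

```python
import lhapdf  # assumes the LHAPDF Python bindings are installed

pdf = lhapdf.mkPDF("NNPDF40_nlo_as_01180", 0)   # member 0 = replica average
x, q = 1e-3, 10.0                               # momentum fraction and scale (GeV)
xg = pdf.xfxQ(21, x, q)                         # returns x * f_g(x, Q)
a_s = pdf.alphasQ(91.1876)                      # strong coupling at m_Z
```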
Finally, we fixed the factorization and renormalization scales to be equal to the average transverse momentum of the hadron and the photon, i.e.

$$\mu_I = \mu_F = \mu_R \equiv \mu = \frac{p_T^{\pi} + p_T^{\gamma}}{2}\,. \qquad (3.2)$$
Regarding the implementation of the smooth isolation criterion, we used the profile function ξ(r) of Eq. (3.3), which is proportional to E_T^γ, the transverse energy of the photon, and we set r_0 = 0.4. As mentioned before, the only requirement for ξ(r) is that ξ(r) → 0 smoothly, and Eq. (3.3) fulfils this condition.
One-dimensional distributions
Since we are looking at the process p + p → π + γ + X, the experimentally accessible variables measured in the c.m. system are

$$V_{Exp} = \{\, p_T^{\pi},\, p_T^{\gamma},\, \eta^{\pi},\, \eta^{\gamma},\, \cos(\phi_\pi - \phi_\gamma) \,\}\,. \qquad (3.4)$$

Notice that we consider only the difference of the azimuthal angles, because the problem has rotational symmetry around the collision axis. Moreover, it turns out that cos(φ_π − φ_γ) is a variable often used by experimental collaborations [57].

Figure 1. Unpolarized cross-section for the production of one photon plus one pion as a function of the transverse momentum of the pion (left) and the photon (right), respectively. We considered the selection cuts described in the previous section, for LHC Run II and RHIC, respectively.
In Figures 1, 2, 3, 4 and 5 we present the single differential cross-section as a function of the variables V_Exp for RHIC and LHC Run II. Our predictions are shown at LO QCD (dotted red), NLO QCD (solid black) and NLO QCD + LO QED (dashed blue) accuracy, considering the default scale choice defined in Eq. (3.2). First, we study the pion (p_T^π) and photon (p_T^γ) transverse-momentum spectra in Fig. 1. The cross-section increases for higher c.m. energies, and the impact of the QED corrections also becomes more sizable.
The distribution in p_T^π falls faster than the p_T^γ-spectrum, mainly because of the convolution with the FFs. In fact, the experimental cuts imposed ensure an important contribution of events with close-to-Born kinematics. In this case, p_T^γ is associated to the transverse momentum of the parton which fragments into a pion with momentum fraction z. Since the FFs tend to favour the region with z ≤ 0.2 [58], the suppression observed in Fig. 1 can be understood. Next, we present the distributions in the rapidities (Fig. 2) and the azimuthal variable cos(φ_π − φ_γ) (Fig. 3). In both cases, we show a comparison between RHIC and LHC Run II. For the rapidity distribution, we appreciate a significant NLO QCD correction, although the added LO QED effects are very small. Regarding the azimuthal spectrum, we can observe in Fig. 3 a peak in the back-to-back region (i.e. cos(φ_π − φ_γ) = −1), with a fast suppression for configurations beyond Born-level kinematics. Besides the distributions w.r.t. the experimentally-accessible quantities, we can compute the differential cross-section as a function of the partonic momentum fractions x_1, x_2 and z. For p + p collisions we consider only the distributions w.r.t. x_1, due to the symmetry of the system. In what follows, x and x_1 will be used interchangeably. The corresponding plots are shown in Fig. 4, for x = x_1 (left) and z (right). We notice that the experimental cut in p_T^γ induces a restriction on the maximum value of x involved in the collision. In fact, using a LO approximation, we can derive an upper bound x_Max; beyond this value, the cross-section is drastically suppressed. For RHIC and LHC Run II, it translates to x_Max ≈ 0.03 and x_Max ≈ 0.001, respectively. Thus, we will use this information to restrict the x-range in the correlation analysis presented in the next section. In this way, we will avoid dealing with regions with a negligible number of events. Notice that the higher the energy of the process, the lower the x-range accessible by the experiment. Regarding the dependence on z (right panel of Fig. 4), it reaches almost the endpoint region (i.e. z = 1) with a reasonable number of events. The fact that we impose p_T^π ≥ 2 GeV translates into a lower bound for z, which corresponds to z_Min ≈ 0.004 and z_Min ≈ 0.0001 for RHIC and LHC Run II, respectively. Opposite to the case of the x-distribution, here the higher the energy of the process, the wider the accessible z-range. It is worthwhile noticing that the FFs used in this work do not include data with z ≤ 0.05 in the fit, and extrapolations into that region are most likely unreliable. The distribution presents a peak, located at z_Peak ≈ 0.35 for RHIC (z_Peak ≈ 0.25 for LHC Run II). The position of the peaks depends on the explicit functional form of the PDFs and the FFs.
To conclude this section, we study the case of p + p̄ collisions at the Tevatron, with √S_CM = 1.96 TeV. In this case, the symmetry between x_1 and x_2 is broken, since x_1 (x_2) corresponds to the momentum fraction of a parton inside the proton (antiproton). In Fig. 5 we present the distribution for x_1 (left) and x_2 (right). We can appreciate that the distribution in x_2 reaches a peak around x_2 ≈ 0.01 and then falls faster than the x_1-distribution. We know from previous studies that the partonic channel gg is dominant [23], and thus we expect the differences to take place in the qq̄ and qQ channels. This also has an impact when studying the x_1 vs. x_2 correlations, as we will show in the next subsection.
Correlations with the partonic momentum fractions
Since one of the main goals of this work is to reconstruct the partonic kinematics starting from experimentally accessible quantities, it is useful to first study the correlations among the different variables. This helps us to prioritize certain ansätze depending on their functional form, in such a way that we capture the leading behaviour when exploring linear models. In the following, we restrict the discussion to RHIC kinematics (with the cuts defined in the previous section).
We start by considering the relation between x = x_1 and the transverse momentum of the particles in the final state. In Fig. 6, we present the correlation between x_1 and p_T^γ (left column) and p_T^π (right column). Each bin contains the corresponding integrated cross-section at LO QCD (upper row) and NLO QCD + LO QED (lower row) accuracy. Notice that the inclusion of higher-order corrections leads to a broadening of the patterns, originated by the presence of events in previously empty bins due to an extended phase-space. This is a general behaviour that also manifests itself when studying the correlations of other variables. Events with low p_T^γ are associated with low x_1, and there is a roughly linear correlation between these variables. Events with low p_T^π are mostly uniformly spread in the region x_1 ∈ [0.2, 0.6]. This behaviour is expected from the fact that the photon originates from the partonic event (its energy is directly related to the energy of the colliding partons), whilst the pion comes from hadronization (which implies the convolution with the FF and the consequent spreading of the distributions).
Next we move on to analyze the correlation between x = x 1 and the rapidities of the particles in the final state. It is important to highlight that the analysis here does depend on the momentum fraction being used, i.e. x 1 or x 2 , since the rapidity introduces an asymmetry in the direction of the colliding particles. We show, in Fig. 7, the plots of x 1 vs. η γ at LO QCD (left) and NLO QCD + LO QED (right), respectively. Similar results were found when considering x 1 vs. η π and are thus not presented here. Since the distributions are rather flat for −0.3 ≤ η ≤ 0.3, we find that most of the events are uniformly distributed for x 1 ∈ [0.2, 0.5]. Finally, notice that below x 1 ≈ 0.2, the cross-section falls steeply as a consequence of the imposed kinematical cuts, and the bins are empty. The analogous results on the z dependence are presented in Fig. 8, the upper (lower) row corresponding to the LO QCD (NLO QCD + LO QED) contributions. On the left column we show the correlation between z and p γ T , and between z and p π T on the right column. The former seems to be slightly negative, i.e. smaller values of z tend to be favoured in events with higher p γ T , while the latter has a concentration of events in the low p π T region with z ≥ 0.4. Also, as expected, events with high p π T require higher values of z since the amount of partonic energy is limited by the cut p γ T ≤ 15 GeV. The correlation between z and the rapidities of the final state particles shows a rather flat dependence on η, as depicted in Fig. 9 for the case of η γ (similar plots were obtained when considering η π ).
Then, let us consider the correlations with the azimuthal variable cos(φ_π − φ_γ) in Fig. 10. Of course, the contributions associated to the Born kinematics are restricted to the first bin, because cos(φ_π − φ_γ) = −1 (i.e. the pion and the photon are produced back-to-back). The remaining bins are heavily suppressed, since they only receive contributions from the real radiation. We see that the events are strongly concentrated in the medium and low-x region, without a clear trend or dependence w.r.t. cos(φ_π − φ_γ). For z, the distribution spreads over more bins, and there is a subtle trend to favour events with a bigger azimuthal separation (smaller values of −cos(φ_π − φ_γ)) and slightly lower values of z. Finally, we analyze the correlation between x_1 and x_2 for p + p collisions. In Fig. 11, we show the correlation plots at LO QCD (left) and NLO QCD + LO QED (right) accuracy, for RHIC kinematics. As expected, there is a compact region containing events at LO, reflecting the kinematical constraints of a 2 → 2 process. The events are concentrated in the low-x region and show a strong positive linear correlation between x_1 and x_2: this reflects the fact that it is more probable to have events in the back-to-back region, in agreement with Fig. 10. When introducing higher-order corrections, the real-emission phase-space gets enlarged and the distributions are spread. In any case, the positive correlation between x_1 and x_2 remains, with a strong concentration of events in the middle and low-x region. Also, it is worth appreciating that the NLO real corrections are not enough to enhance the number of events with rather different values of x_1 and x_2. This is, in part, a consequence of the kinematical cuts, which favour central events rather than highly boosted ones. To conclude this section, let us comment on the importance of the study of correlations. Since we want to reconstruct the partonic momentum fractions by using the measurable variables, it is important to know which ones are the most relevant. From the previous discussion, we expect that x strongly depends on p_T^γ (positive correlation) but not on the other variables. Analogously, z exhibits a negative correlation with p_T^γ, a positive one with p_T^π and a slight dependence on −cos(φ_π − φ_γ). This knowledge will be applied to the construction of a basis of functions for determining x and z in the next section.
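Schematically, each of these correlation maps is just a two-dimensional histogram of the cross-section; a hypothetical sketch of how they can be built from weighted MC events:

```python
import numpy as np

def correlation_map(u, v, weights, u_edges, v_edges):
    """Bin the MC events so that entry (i, j) holds the integrated
    cross-section, e.g. u = x1 and v = pT_gamma with dsigma weights."""
    hist, _, _ = np.histogram2d(u, v, bins=[u_edges, v_edges], weights=weights)
    return hist
```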
Reconstruction of parton kinematics
We now focus on our main goal, which is to determine the partonic variables x_1, x_2 and z in terms of the measured momenta of the final-state particles. At LO this is fully determined by energy-momentum conservation, and thus the LO case will serve as a control. The real challenge appears at NLO, where real emissions prevent a straightforward determination of closed analytic formulae: this is what we will attempt to approximate using ML 7.
In supervised ML, we have an initial set of data (the training set) and we want to map it into another known set (the target). Each entry in the training set is a vector of dimension d, with d the number of variables (features) that the target depends upon. We also assume that there is some underlying function, the so-called target function, that connects the two; the task of an ML algorithm is to find a good estimation of this function. This estimator, in turn, depends on a set of parameters that is determined by minimising a function (the cost function) that measures some distance between the predictions of the estimator and the actual targets. As a last step, one takes another set of data with corresponding labels (the test set) and checks how well the estimator does over it. To prevent the estimator from performing well over the training data set but poorly over the test set (overfitting), the cost function also includes some parameters to control the trade-off between a low training cost and a low test cost. The total number of regularization parameters depends on the specific method used, and the optimal values have to be found by picking those that minimize the test cost function.
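In scikit-learn terms (the library used below), this workflow reduces to a few calls; the estimator, the synthetic data and the regularization grid in this sketch are illustrative placeholders rather than the setup actually used in the analysis.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.random((1000, 5))           # stand-in for the training inputs
targets = features @ rng.random(5)         # stand-in for the target values

X_tr, X_te, y_tr, y_te = train_test_split(features, targets, test_size=0.2)
scores = {a: Ridge(alpha=a).fit(X_tr, y_tr).score(X_te, y_te)
          for a in (1e-3, 1e-2, 1e-1, 1.0)}
best_alpha = max(scores, key=scores.get)   # regularization picked on the test cost
```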
Armed with these basic concepts, we first discuss the generation of our input and target sets using the outputs of our MC code. After that, we present results obtained through the application of supervised ML for estimating x ≡ x 1 and z at LO QCD and NLO QCD + LO QED accuracy. For the purpose of the present analysis, we explore three models: a Linear Model (LM), a Gaussian Regression (GR) and the Multi-Layer Perceptron (MLP) algorithm based on neural networks. These models have been implemented in Python using the scikit-learn library [60].
Construction of the training data sets
The training and test sets were generated with the MC code used and described in the previous sections. As was mentioned already, it deals independently with each term of the computation (LO, NLO real radiation, NLO virtual terms, NLO counter-terms). This poses two difficulties when generating the training set for feeding the ML algorithms. On the one hand, only the LO calculations are finite on their own; for the NLO cross-section, we have to combine all terms (real, virtual and counter-terms) to obtain a meaningful finite quantity. On the other hand, by the very nature of the MC integration, no two identical points are generated in the sampling, which in turn spoils the fully local cancellation of the divergences. Instead, one has to split the different variables into bins and sum over all contributions entering each of them. If a sufficient number of points is sampled, the divergences cancel and we obtain the finite cross-section per bin. This is a common feature of MC integration, and many codes provide routines that take care of this for one-dimensional binning. In our case we are interested in a more differential observable, so we had to generate a large number of points to meet this condition. Moreover, not all sampled points pass the selection cuts, e.g. from the 10^9 points sampled we retain ≈ 30% at LO.

7 Doing a formal description of the ML methods that we used is beyond the scope of this work, and would take much more than a simple article. Moreover, much literature is available on the topic (see e.g. [59]), so we will leave out such a discussion and mention just a few basic concepts needed in the rest of the section.
For the LO we can directly use the generated points, but for the NLO case we need to discretize the differential cross-section w.r.t. the external kinematical variables defined in Eq. (3.4). For this purpose, we create a five-dimensional grid by binning the variables in V_Exp. Explicitly, we define 10 bins for p_T^γ and p_T^π, 5 bins for η^γ and η^π, and 6 bins for cos(φ_π − φ_γ). The set of discretized experimentally-measurable variables, denoted $\bar{V}_{Exp}$, collects the mean value ā of each variable a in a given bin. In total, $\bar{V}_{Exp}$ contains 15000 bins. Then, we define the cross-section per bin, σ_j, by integrating the fully-differential hadronic cross-section, expressed as a function of the partonic momentum fractions and the experimentally-measurable variables V_Exp, over the j-th bin, i.e. between x_{j,MIN} and x_{j,MAX} (the minimum and maximum values of each variable x in that bin, with x̄ the corresponding average of x over the bin). At LO, σ_j can be straightforwardly calculated, since we only need to integrate the tree-level scattering amplitude over a 2 → 2 phase-space. However, as we explained in Sec. 2, the NLO corrections include several contributions calculated with different kinematics (virtual, real, counter-terms): all of these are taken into account in dσ and integrated over their corresponding phase-space to obtain σ_j 8.
Once the grid and the discretized cross-section are defined, we use the MC code to generate three histograms for each bin in the grid. These histograms correspond to the distributions dσ_j/dx_1, dσ_j/dx_2 and dσ_j/dz, respectively. So, given a point p_j in the grid $\bar{V}_{Exp}$, we can define

$$(x_1)_j = \frac{1}{\sigma_j}\int dx_1\, x_1\, \frac{d\sigma_j}{dx_1}\,, \qquad (x_2)_j = \frac{1}{\sigma_j}\int dx_2\, x_2\, \frac{d\sigma_j}{dx_2}\,, \qquad (z)_j = \frac{1}{\sigma_j}\int dz\, z\, \frac{d\sigma_j}{dz}\,, \qquad (4.5)-(4.7)$$

which correspond to the weighted averages of the partonic momentum fractions extracted from the histograms generated with the MC code. At this stage, we can identify $\bar{V}_{Exp}$ as the training set and {(x_1)_j, (x_2)_j, (z)_j} as the target one. Then, we can train the ML algorithms to find the target functions that will allow us to reconstruct the MC partonic momentum fractions X_1,REAL, X_2,REAL and Z_REAL. To conclude this discussion, notice that the definitions given in Eqs. (4.5)-(4.7) are crucial beyond LO. In fact, for a 2 → 2 process, fixing the bin p_j ∈ $\bar{V}_{Exp}$ leads to a unique value of the partonic momentum fractions. Explicitly, we have

$$x_1 = \frac{p_T^{\gamma}}{\sqrt{S_{CM}}}\left(e^{\eta_\gamma}+e^{\eta_\pi}\right), \qquad x_2 = \frac{p_T^{\gamma}}{\sqrt{S_{CM}}}\left(e^{-\eta_\gamma}+e^{-\eta_\pi}\right), \qquad z = \frac{p_T^{\pi}}{p_T^{\gamma}}\,, \qquad (4.11)-(4.13)$$

as explained in Ref. [23]. Due to the presence of 2 → 3 sub-processes contributing to the real radiation, the value of {x_1, x_2, z} for a given p_j is not unambiguously defined at NLO (and beyond). If we pick an event with a fixed p_j from our NLO MC generator, the real partonic momentum fractions might take any of the kinematically-allowed values. However, the probability of the different outcomes is given by the differential cross-section of the event, which motivates the definitions introduced in Eqs. (4.5)-(4.7). In the following, we explain how these data sets are used with the different ML frameworks.

8 It is worth appreciating that binning could be avoided using a fully-local framework for computing higher-order corrections [61,62]. One of these methods is the Four-Dimensional Unsubtraction (FDU) [63][64][65][66], based on the Loop-Tree Duality [67][68][69]. Since FDU leads to a fully-differential and finite representation of the cross-section, it constitutes a perfectly suited candidate to improve the efficiency of the analysis presented in this article.
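In practice, the targets of Eqs. (4.5)-(4.7) amount to cross-section-weighted means per bin. A schematic NumPy version follows; the event fields are hypothetical, and note that NLO weights may be negative, so the divergences cancel only within each bin.

```python
import numpy as np

def bin_targets(bin_ids, x1, x2, z, weights, n_bins):
    """bin_ids: flat grid-bin index per MC event; weights: dsigma weights."""
    sigma = np.bincount(bin_ids, weights=weights, minlength=n_bins)
    def wmean(v):
        num = np.bincount(bin_ids, weights=weights * v, minlength=n_bins)
        return np.divide(num, sigma, out=np.zeros_like(num), where=sigma != 0)
    return wmean(x1), wmean(x2), wmean(z)   # (x1)_j, (x2)_j, (z)_j per bin
```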
Linear regression
Linear methods, as the name indicates, provide the estimation of the target function as a linear combination of the input set. However, the linearity occurs at the level of the parameters, and one can apply prior knowledge to construct new features upon which the target dependence is simpler. Choosing a good set of features (a basis) plays an important role in achieving an accurate reconstruction.
For example, at LO we take inspiration from the exact analytical expressions given by Eqs. (4.11)-(4.13) and propose the basis

$$B_{LO} = \left\{ \frac{p_T^{\gamma} e^{\eta_\gamma}}{\sqrt{S_{CM}}},\ \frac{p_T^{\gamma} e^{\eta_\pi}}{\sqrt{S_{CM}}},\ \frac{p_T^{\gamma} e^{-\eta_\gamma}}{\sqrt{S_{CM}}},\ \frac{p_T^{\gamma} e^{-\eta_\pi}}{\sqrt{S_{CM}}},\ \frac{p_T^{\pi}}{p_T^{\gamma}} \right\}\,. \qquad (4.14)$$

We then expect x_1 to be well reconstructed by a linear combination of the first two elements of the basis (with coefficient 1), whilst z should be mainly proportional to the last element. In Fig. 12, we show the correlation between the MC partonic momentum fractions (vertical axis) and the output of the linear regression (horizontal axis). Each bin contains the integrated cross-section at LO QCD accuracy. We can appreciate that the reconstruction is perfect, and the LM approach leads exactly to Eqs. (4.11)-(4.13). When dealing with the NLO scenario, an enlargement of the basis should in principle be expected: the elements of B_LO are not enough to fully capture the additional dependencies introduced by the NLO real kinematics. In fact, in Ref. [23] the authors proposed expressions that agree with Eqs. (4.11)-(4.13) at LO, but introduce an additional dependence on the azimuthal variables at higher orders. The study of correlations performed at NLO QCD accuracy using these expressions showed a good reconstruction of the MC partonic momentum fractions.
With this precedent in mind, we propose here to include additional functional dependencies to obtain a more flexible reconstruction. We start by defining a primitive set of functions, {K_i}, built from the measurable kinematical variables, and write the reconstructed variables in the form of Eq. (4.19). The ansatz proposed in Eq. (4.19) generalizes the basis B_LO and includes products of up to three kinematical variables, which gives more flexibility to fit the data. In total, there are eighty-one functions in the basis, which we denominate the general basis. However, as we will now explicitly see, a larger basis does not imply a better reconstruction.
If we take Eq. (4.19) with Y = {x_1, z}, we obtain the results shown in the upper row of Fig. 13. In this figure, we indicate the strength of the correlation with the integrated cross-section per bin at NLO QCD + LO QED accuracy. The fitted coefficients are given in App. A. We can appreciate that the reconstruction is good in the low-x and low-z region. This is expected, because the cross-section is larger in that region, so there are more data points to perform the fit. However, the reconstruction becomes noisy and imprecise for higher values of the momentum fractions. The LM is unable to keep the functional dependencies that better approximate the real momentum fractions in regions with a low number of events. For this reason, we explore a second approach. We profit from the findings in Sec. 3.2 and distinguish different bases for Y = x_1 and Y = z. It was shown that x_1 exhibits a positive correlation with p_T^γ, so we remove the contributions involving K_6 = (p_T^γ)^{-1} from Eq. (4.19). Regarding z, the conclusion of Sec. 3.2 was that it is correlated with K_6 = (p_T^γ)^{-1} and K_2 = p_T^π, and that it also presents a mild correlation with K_5. So, we remove the contributions that involve the primitive functions K_1 and K_7. As a result, we propose a physically-motivated reconstruction by taking Eq. (4.19) and imposing the constraints of Eq. (4.20) for x and of Eq. (4.21) for z. The coefficients obtained with these assumptions are presented in App. A, whilst the corresponding correlations with the real MC momentum fractions are shown in the middle row of Fig. 13. We can appreciate that the correlation is slightly better for z, but worse for x. Even if the physically-motivated basis includes elements selected according to the correlations with physical variables, it turns out that the abundance of points in a particular region of the parameter space imposes a very tight constraint on the whole fit. For z, this is not a big problem, since it seems to be dominated by the ratio p_T^π/p_T^γ. However, the dependence of x on the kinematical variables is more complicated, and a linear fit is not enough to capture it. Thus, reducing the basis does not lead to an improved reconstruction of the momentum fractions.
To conclude this discussion, let us mention that we tested the LM with another basis inspired by the LO formulae. Namely, this LO-inspired basis is given by Eq. (4.22) for x ≡ x_1 and Eq. (4.23) for z. In this case, the reconstruction was even worse, as can be seen in the lower row of Fig. 13. In particular, X_1,REC seems to be uncorrelated with X_1,REAL. So, we can appreciate that the approach followed in Ref. [23] was more efficient than the LM: forcing a linear combination that describes the LO kinematics, and then using the same formulae at higher orders, allows one to achieve a more precise reconstruction. In the next subsections, we explore other methods that lead to a better approximation of the MC momentum fractions in a more automatized way.
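A condensed scikit-learn version of the LO fit, using a subset of the basis in Eq. (4.14), is sketched below; the synthetic kinematics and cut ranges are placeholders, not the actual MC sample.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
sqrt_s = 200.0                                   # RHIC-like c.m. energy (GeV)
pt_g = rng.uniform(5.0, 15.0, 500)               # photon transverse momentum
pt_pi = rng.uniform(2.0, 10.0, 500)              # pion transverse momentum
eta_g, eta_pi = rng.uniform(-0.35, 0.35, (2, 500))

basis = np.column_stack([pt_g * np.exp(eta_g) / sqrt_s,
                         pt_g * np.exp(eta_pi) / sqrt_s,
                         pt_pi / pt_g])          # subset of B_LO
x1_target = basis[:, 0] + basis[:, 1]            # exact at Born level
fit = LinearRegression(fit_intercept=False).fit(basis, x1_target)
print(fit.coef_)                                 # -> approximately [1, 1, 0]
```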
Gaussian regression
While the LM method provides a good description of the LO case, at NLO the result strongly depends on the variables used to feed the algorithm. As the larger basis seems to render a slightly better reconstruction, we could use this as a motivation to further expand our basis, e.g. by including higher powers of its elements. However, this relies on deciding i) which appropriate combinations of K_i are needed, and ii) to which power it would be convenient to go. The first point was addressed in Subsec. 4.2 by constructing several bases, with different degrees of success. Regarding the second point, we could try different powers of a given basis, but this would be a cumbersome task. A more general and computationally efficient approach can be implemented by using the kernel trick (see e.g. [70,71]). In this method, the feature vector in the calculation is replaced by writing everything in terms of a function (the kernel) evaluated on pairs of elements of the training set. In particular we use the radial basis function (RBF), defined as

$$k(x_i, x_j) = \exp\left(-\frac{d(x_i,x_j)^2}{2\,l^2}\right)\,,$$

where x_i, x_j are two elements of the training set, d(x_i, x_j) is the Euclidean distance between them, and l is a distance parameter (not necessarily the same for all {i, j}). The RBF has the advantage of effectively including all powers of its argument through the expansion of the exponential, and therefore we expect a better reconstruction of the kinematic variables.

Figure 14. Correlation between the MC momentum fractions (i.e. X_REAL and Z_REAL) versus the ones obtained at NLO QCD + LO QED accuracy (X_REC and Z_REC). We show the results corresponding to the GR approach, using the general basis (upper row), the physically-motivated basis (middle row) and the LO-inspired basis (lower row).
Similarly to the LM, the GR requires a set of input variables. In order to properly compare the methods, we take the same bases for both. The GR also needs the user to select the width of each Gaussian function, l, which is by default l = 1. In principle it could be different for each feature of the input set, but for simplicity we keep it feature-independent. However, we did find better reconstructions when using different l for x and z. The optimal values of l for each basis can be found in Table 1. We find that, when using the most general basis, a better agreement between the reconstructed and the real data sets requires broad Gaussian functions. In addition, if we reduce the basis, the GR tends to require wider Gaussian functions to achieve a good description of the data sets. Finally, we find that with the physically-motivated basis, the GR achieves the best agreement by choosing l = 1 for the prediction of x and l = 1.5 for z, i.e. sharp Gaussian functions are needed, meaning that a combination of these variables is enough to reproduce the full data sets.
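A minimal Gaussian-regression sketch with the length scale frozen is shown below; scikit-learn otherwise optimizes l internally, whereas the text scans it by hand, and the data here are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.random((200, 3))                 # placeholder feature basis
y = X[:, 0] + 0.5 * X[:, 1] ** 2         # placeholder non-linear target

kernel = RBF(length_scale=1.0, length_scale_bounds="fixed")  # fixed l
gr = GaussianProcessRegressor(kernel=kernel, alpha=1e-8,
                              normalize_y=True).fit(X, y)
y_rec = gr.predict(X)
```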
These facts can be appreciated in Fig. 14, where we present the results obtained at NLO QCD + LO QED accuracy. As expected, the inclusion of higher-order terms (higher non-linearity) in the training set brings a significant improvement with respect to the LM, in particular for the reconstruction of x. In addition, we point out that, among the three bases, the reconstruction of x is in general harder than that of the z momentum fraction. The general basis can extract enough information to almost completely determine a function for the prediction of the momentum fractions, but with wide Gaussian functions. In contrast, the physically-motivated basis does a good job in the determination of z but is not as accurate in the extraction of x, although it requires sharp Gaussian functions, meaning that they are well localized and determined.
To conclude this section, we appreciate that the GR method leads to a more reliable reconstruction of the MC momentum fractions compared to the LM. The best results are obtained with a larger basis, in order to have more flexibility. Moreover, the non-linearity inherent to the GR allows one to overcome the overfitting towards the low-x and low-z region that we observed in the LM, leading to a very accurate reconstruction over a wider range.
Neural Networks
Before jumping into the results of this section, let us briefly remind the reader of what a neural network (NN) is. The building blocks of a NN are algorithms (called Perceptrons) used in supervised learning to decide whether an input belongs to a class or not (binary classifiers).
They consist of a set of input values X that are linearly combined through weights (W) and independent terms B (biases), after which the sum is transformed by a (usually non-linear) activation function f, giving an output Y: Y = f(z) with z = X · W + B. Each Perceptron mimics a neuron, and a combination of them makes a NN. The standard nomenclature labels the inputs and outputs as the input and output layers, respectively. To increase the capabilities of the NN (and its complexity) one can add more neurons in between, organised in hidden layers. The activation functions connecting one layer to the next do not need to be the same, nor does the number of neurons in each hidden layer. The learning proceeds in two steps. First, the NN computes the output from the inputs (feed-forward). In a second step (back-propagation), it calculates the cost and then minimizes it. This can be implemented in different ways, one of the most popular being stochastic gradient descent 9. The choice of the activation function(s) and relevant parameters is highly non-trivial, and trial and error was required to find a configuration that could reproduce the momentum fractions. A non-exhaustive comparison of different combinations is presented in App. B, but here we limit ourselves to presenting the results corresponding to the parameters summarised in Table 2, which are used within the scikit-learn framework.
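Since Table 2 itself is not reproduced in this text, the configuration below is only a placeholder showing how such a network is set up in scikit-learn; the n_iter_no_change option implements the "no variation of the cost within a tolerance" convergence criterion mentioned in App. B.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((2000, 5))                # stand-in for the binned V_Exp inputs
y = X[:, 0] * np.exp(X[:, 2]) / 5.0      # stand-in target, e.g. (x1)_j

mlp = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation="relu",
                   solver="adam", max_iter=5000, tol=1e-7,
                   n_iter_no_change=50, random_state=0)
mlp.fit(X, y)
x1_rec = mlp.predict(X)
```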
The results of the MLP algorithm are presented in Fig. 15 for the LO QCD contribution (upper row) and the NLO QCD + LO QED correction (lower row). In the LO case the reconstruction is quite good, without reaching the level of accuracy of the LM or GR. This is strong evidence that the complexity of the NN machinery greatly exceeds that of the task to be solved. In the NLO case, on the contrary, the reconstruction is much better than the one obtained with the LM using any basis, and similar to the GR one with the general basis (upper row of Fig. 14). The plots show an almost perfect agreement in all bins for both x and z. The largest discrepancy appears for x, which can be partially due to the higher complexity of the target function for x than for z, already suggested by the analytic LO expressions. Indeed, almost all trials performed with different methods and configurations arrive at reasonable relations between Z_REAL and Z_REC. However, for x, we have to either increase the number of elements in our basis (GR) or the number of layers/nodes (NN).

Figure 15. Left: Comparison of the momentum fractions X_REAL and X_REC obtained with MLP neural networks with the parameters given in Table 2. The upper (lower) row corresponds to the LO QCD (NLO QCD + LO QED) data set. Right: same as the left panel but for Z_REAL and Z_REC.
In any case, we can highlight that the MLP algorithm does not require choosing any particular basis: the complexity is translated into defining the proper architecture. This task is more suitable for automation, and thus more appropriate for tackling generic physical processes, regardless of the number or kind of particles involved. Whereas the LM or GR can take advantage of physically-motivated parameter choices to speed up an accurate reconstruction, the NN framework relies mainly on computational power to reduce the problem to a black-box function.
Conclusions and outlook
In this work we have explored the reconstruction of the parton-level kinematics for the process p + p → γ + h using Machine-Learning (ML) tools. First, we implemented the calculation in a Monte-Carlo (MC) code with NLO QCD and LO QED accuracy. We relied on the FKS algorithm to cancel the infrared singularities, and on the smooth cone isolation criterion to select those events with direct photons. This prescription is crucial to have access to cleaner information from the hard process.
Then, we studied different kinematical distributions with the purpose of identifying the regions with the largest number of events. After imposing selection cuts similar to those used by experimental collaborations, dynamical cuts were induced in the x and z distributions. These restrictions were taken into account when selecting events for analysing the correlations between the experimentally-accessible quantities (p_T, η and φ for the photon and pion) and the partonic momentum fractions. We realized that x strongly depends on p_T^γ (positive correlation) but not on the other variables, whilst z exhibits a negative correlation with p_T^γ, a positive one with p_T^π and a mild dependence on cos(φ_π − φ_γ). After that, we applied ML algorithms to reconstruct the partonic variables x_1, x_2 and z. We started by introducing a proper discretization of the multi-differential cross-section w.r.t. the set of variables {p_T^π, p_T^γ, η_π, η_γ, cos(φ_π − φ_γ)}, in order to have a reliable estimation of the higher-order corrections in each bin. For these distributions, we generated the data sets and explored three different ML reconstruction strategies: linear methods (LM), Gaussian Regression (GR) and the Multi-Layer Perceptron (MLP). For the first two approaches, we introduced three bases of functions inspired by the results obtained from the analysis of the two-dimensional correlations in Sec. 3.2. In all cases, the reconstruction at LO QCD accuracy was very successful, and in agreement with the known analytical expressions. When dealing with the NLO QCD + LO QED corrections, the flexibility of the MLP approach leads to a very reliable reconstruction, achieving a better performance than the LM and one comparable to the GR with a sufficiently large basis. In particular, the LM results were highly influenced by the abundance of data in the low-x and low-z region, leading to an unreliable fit when extrapolated outside these regions.
It is worth appreciating that the number of assumptions related to the setup of the MLP framework is rather limited, compared to the ones made for the linear and Gaussian regressions. In particular, we want to highlight that there was no need to introduce a specific basis of functions, which makes this approach fully process-independent and suitable for other analyses.
In conclusion, the application of ML-inspired methods (and Neural Networks in particular) is suitable for unveiling the partonic kinematics at hadron colliders, including also higher-order corrections. In this way, ML-assisted event reconstruction might allow us to achieve a highly-precise description of the deepest constituents of matter and their interactions, complementing the current developments in other areas of theoretical particle physics.
H.-P. is also funded by the Sistema Nacional de Investigadores from CONACyT and PROFAPI 2022 (Universidad Autónoma de Sinaloa). P.Z. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Research Unit FOR 2926, grant number 409651613.
A Coefficients for the Linear Method
For completeness, we present the coefficients associated with the linear regression for each of the three bases studied in Subsec. 4.2. We restrict our attention to the fit of the data sets at NLO QCD + LO QED accuracy, since the LO contributions were in perfect agreement with the analytical LO formulae. In Tab. 4 we present the coefficients of the most general basis, Eq. (4.19), that reproduce the plots in the upper row of Fig. 13. The parameters of the physically-motivated basis, given by Eq. (4.19) with the constraints of Eqs. (4.20)-(4.21), are in Tabs. 5 and 6 for x and z, respectively. The corresponding correlation with the real MC variables can be seen in the middle row of Fig. 13. Finally, the coefficients for the LO-inspired basis, associated to the constraints in Eqs. (4.22)-(4.23), can be found in Tab. 7. These fall short in the quality of the fit, as we can appreciate from the lower row of Fig. 13.
B Comparison of different NN architectures
We summarize here some results that were obtained before the optimal architecture described in Subsec. 4.4 was found. In Tab. 3 we present the parameters corresponding to the three different tests we implemented.
Table 3. Architectures of the MLP for the three different tests of the reconstruction of the momentum fractions at NLO in QCD. All parameters are taken to be the same for X_REC and Z_REC.
In TEST1 (upper row of Fig. 16), we use a lower number of neurons per layer and fewer layers than for obtaining the results in Fig. 15. We find a poor agreement between the real and reconstructed quantities, in particular for low-z bins. An improvement is achieved by increasing the number of layers and neurons per layer (TEST2), while simultaneously requiring the NN to see no variation of the cost function (within a given tolerance) through a larger number of iterations. As seen in Fig. 16 (middle row), this gives a better reconstruction, though it is still far from ideal. A third example, TEST3, reinforces the conditions for convergence and returns a significantly improved result (lower row of Fig. 16). Each step towards a more complex architecture and more stringent requirements for convergence is translated into an increase of the computational time required for the training. These, and other trials, have guided us to the selection of the best architecture for our task, summarised in Tab. 2.

Figure 16. Comparison of the momentum fractions X_REAL vs. X_REC (left) and Z_REAL vs. Z_REC (right) obtained with the MLP at NLO QCD + LO QED accuracy. The parameters for TEST1 (upper row), TEST2 (middle row) and TEST3 (lower row) are given in Tab. 3.
"Physics",
"Computer Science"
] |
Probing quantum chaos in multipartite systems
Understanding the emergence of quantum chaos in multipartite systems is challenging in the presence of interactions. We show that the contribution of the subsystems to the global behavior can be revealed by probing the full counting statistics of the local, total, and interaction energies. As in the spectral form factor, signatures of quantum chaos in the time domain dictate a dip-ramp-plateau structure in the characteristic function, i.e., the Fourier transform of the eigenvalue distribution. With this approach, we explore the fate of chaos in interacting subsystems that are locally maximally chaotic. Global quantum chaos can be suppressed at strong coupling, as illustrated with coupled copies of random-matrix Hamiltonians and of the Sachdev-Ye-Kitaev model. Our method is amenable to experimental implementation using single-qubit interferometry.
A prominent signature of quantum chaos is the repulsion among energy levels. For instance, the spacing between nearest-neighbor levels follows the Wigner-Dyson distribution in quantum chaotic systems, while it is described by Poisson statistics in the presence of conserved quantities (e.g., in integrable systems) [38]. The spectral form factor (SFF) is proportional to |Z(β + it)/Z(β)|², where Z(·) is the partition function and β = 1/k_B T. This quantity probes the level statistics of both close and far-separated energy eigenvalues, providing a tool to detect the ergodic nature of the system [1]. For a generic chaotic system, the SFF exhibits a dip-ramp-plateau structure [see, e.g., Fig. 1]. Its short-time decay forms a slope. The physical origin of the subsequent ramp is the long-range repulsion between energy levels [11]. The transition from the slope to the ramp forms the dip. The final plateau originates from the finite Hilbert space dimension and approaches a constant value Z(2β)/Z(β)² in the absence of degeneracies in the energy spectrum. The SFF has been widely employed in the study of the discrete energy spectrum of quantum chaotic systems [11,39-45].

Quantum chaotic systems composed of multipartite subsystems subject to generic interactions typically have a complicated energy spectrum [45-48]. We shall focus on a global system composed of strongly chaotic subsystems interacting with each other. In this setting, any subsystem can be seen as an open quantum system embedded in an environment composed of the remaining subsystems. The subsystem dynamics is thus governed by dissipative quantum chaos [1], which is currently under exhaustive study [29,30,33,34,37,49-53]. We shall depart from the standard practice of assuming an effective open quantum dynamics, as ubiquitously done in the literature. Instead, we will account for the exact unitary dynamics of the global composite system, with no approximations (e.g., without invoking a Markovian description or an effective master equation).

The above diagnostics can be employed to detect global quantum chaos in multipartite systems [48]. However, apart from proposals like the fidelity-based SFF [34,37] and the related partial SFF [45], they are not suited to directly detecting how chaotic behavior stems from the subsystems and their interactions. In this work, we provide an experimentally realizable approach to this end by considering the measurement of an energy observable X, which can be the Hamiltonian of a subsystem or the interaction energy. As measurement outcomes are stochastic, we propose to study the full counting statistics, characterized by the eigenvalue distribution of X at thermal equilibrium. Its Fourier transform, the characteristic function, reveals chaotic behavior through the dip-ramp-plateau structure. Its analysis shows that strong interactions among the different subsystems can suppress the global chaotic behavior of the multipartite system, even when the subsystems are maximally chaotic, as revealed by the study of global and local observables. This scheme not only provides a convenient theoretical tool to diagnose quantum chaos in complex multipartite quantum systems, but it can also be experimentally realized using single-qubit interferometry with an ancillary qubit.

Our paper is organized as follows. In Sec. II, we introduce the characteristic function of an energy observable, which is then used as a tool to detect the chaotic features stemming from the subsystems and their interactions in Sec. III. Then, we employ this method to analyze the chaotic behavior in multipartite systems sampled from the Gaussian orthogonal ensemble (GOE) of Random Matrix Theory (RMT) [2,54] in Sec. IV and in the coupled Sachdev-Ye-Kitaev (cSYK) models [55-64] in Sec. V. Finally, we summarize in Sec. VI with concluding remarks and a brief discussion of potential applications.
II. THE CHARACTERISTIC FUNCTION OF THE ENERGY OBSERVABLE X
Let H = ⊗_l H_l be the Hilbert space of a multipartite system and let X be an energy observable of a local subsystem in the subspace ⊗_k H_k ⊆ H. We focus on the Hamiltonians of the subsystems (local energies) and the interaction energy as choices of the observable X. The probability distribution of the observable X with eigenvalues {x}, averaged over an initial thermal equilibrium state ρ_th = e^(−βH)/Z(β), is [65]

P(x) = tr[ρ_th δ(X − x)]. (1)

The eigenvalue probability distribution P(x) encodes the full counting statistics of the observable X, that is, the probability to find the system in an eigenstate with eigenvalue x when prepared in the state ρ_th. In terms of the integral representation of the Dirac delta function, the probability distribution can be expressed as the Fourier transform of the characteristic (moment-generating) function, which captures the statistical properties of the spectrum of the observable X. While in the following we refer to t as a time variable, it is to be understood as the Fourier conjugate of x. Experimentally, the characteristic function can be accessed via single-qubit interferometry with an ancillary qubit coupled to the system.
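As an illustration of these definitions, the following minimal sketch evaluates P(x) and the characteristic function numerically for a small bipartite toy model; the matrix sizes, temperature, and time grid are illustrative choices only, not the parameters used in the paper.

```python
# Minimal sketch of Eq. (1) and the associated characteristic function for a
# small bipartite toy model with X chosen as a local energy.
import numpy as np

rng = np.random.default_rng(1)
def rand_sym(d):                                  # generic real symmetric matrix
    a = rng.normal(size=(d, d))
    return (a + a.T) / 2.0

d1 = d2 = 8
H1, H2 = rand_sym(d1), rand_sym(d2)
H = np.kron(H1, np.eye(d2)) + np.kron(np.eye(d1), H2)   # no interaction here
X = np.kron(H1, np.eye(d2))                             # local energy of subsystem 1

beta = 0.1
eH, U = np.linalg.eigh(H)
rho_th = U @ np.diag(np.exp(-beta * (eH - eH.min()))) @ U.T
rho_th /= np.trace(rho_th)                              # thermal state exp(-beta H)/Z

eX, V = np.linalg.eigh(X)
p = np.real(np.diag(V.T @ rho_th @ V))   # full counting statistics P(x) on eigenvalues eX
ts = np.linspace(0.0, 50.0, 500)
chi = np.exp(1j * np.outer(ts, eX)) @ p  # characteristic function: Fourier transform of P(x)
G = np.abs(chi) ** 2                     # absolute-square value studied below
```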
III. A PROBE FOR QUANTUM CHAOS IN MULTIPARTITE SYSTEMS
In what follows, we consider the absolute square value of the generating function in Eq. (3), denoted G(t) [Eq. (5)], as a tool for probing quantum chaos in multipartite systems, identifying contributions from the subsystems and their interactions. The choice X = H_j represents the local energy and can be employed to diagnose the chaotic behavior contributed by the j-th subsystem in an N-partite system. By contrast, the observable X = H_j ⊗ H_k ⊗ H_l can be used to detect signatures of quantum chaos attributed to the interactions among subsystems j, k, and l. An observable whose characteristic function shows no obvious dip-ramp-plateau structure plays an important role in attenuating the chaoticity.

When X = H, G(t) can be interpreted as the fidelity between the initial coherent Gibbs state (or a thermofield double state) and the state resulting from its evolution [34,44]. In addition, when X is a small perturbation of the Hamiltonian H, i.e., H = H_0 + X, and commutes with H (or H_0), Eq. (5) coincides with the Loschmidt echo (LE), which captures the overlap between two identical initial states |ψ_0⟩ evolving under slightly different Hamiltonians H and H_0 [17]. Note that, according to the Baker-Campbell-Hausdorff formula, Eq. (5) and the LE differ in the general case, when [X, H] ≠ 0. We emphasize that the observable X in Eq. (5) is not required to represent a small perturbation or to commute with H. When it describes the local energy of a subsystem ⊗_k H_k ⊆ H, Eq. (5) provides the possibility to directly detect the chaotic behavior contributed by the subsystems or the interactions to the global multipartite system. It can be used to diagnose quantum chaos either of a one-partite system (as done by the SFF) or of a structured multipartite system. To support this observation, we illustrate its use in the following examples involving coupled random-matrix Hamiltonians and the coupled SYK model.
IV. PROBING THE CHAOTICITY IN COUPLED RANDOM-MATRIX HAMILTONIANS
Consider an N-partite system with a general Hamiltonian composed of subsystem Hamiltonians and their mutual interaction terms. For the sake of illustration, let us sample the Hamiltonians from the Gaussian orthogonal ensemble (GOE) [2,54], which is a paradigmatic random-matrix ensemble for physical applications involving systems with time-reversal symmetry that exhibit quantum chaos. The GOE is the ensemble of real symmetric matrices whose elements are chosen at random from a Gaussian distribution. The joint probability density of H_j ∈ GOE(d_j) (d_j denotes the dimension of the Hilbert space H_j) is proportional to exp(−tr H_j²/(2σ²)), where σ is the standard deviation of the random matrix elements of H_j.
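For concreteness, a sampling routine consistent with this convention is sketched below; the variance check is purely illustrative. Under the density ∝ exp(−tr H²/(2σ²)), the diagonal entries have variance σ² and the off-diagonal entries variance σ²/2, which is exactly what symmetrising a Gaussian matrix produces.

```python
# Sketch of GOE(d) sampling consistent with the density ~ exp(-tr H^2 / (2 sigma^2)).
import numpy as np

def sample_goe(d, sigma=1.0, rng=np.random.default_rng()):
    a = rng.normal(scale=sigma, size=(d, d))
    return (a + a.T) / 2.0        # diag var = sigma^2, off-diag var = sigma^2 / 2

rng = np.random.default_rng(2)
samples = np.array([sample_goe(4, 1.0, rng) for _ in range(20000)])
print("diagonal variance    (expect ~1.0):", samples[:, 0, 0].var())
print("off-diagonal variance (expect ~0.5):", samples[:, 0, 1].var())
```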
The first example we consider is N = 1 and X = H. In this scenario, we focus on H ∈ GOE(d) and H ∈ GOE(d_1) ⊗ GOE(d_2) (with d_1 d_2 = d) as examples. Equation (5) averaged over the full GOE [i.e., GOE(d)] yields Eq. (7), where ⟨·⟩ represents the ensemble average and ≐ denotes the annealing approximation [11]. In Eq. (7), the GOE-averaged partition function is given by Eq. (8) (see Appendix A), where I_n(·) is the modified Bessel function of the first kind and order n, and the coefficient reads as in Eq. (9). As shown by the red dotted curve (or the dark blue curve from numerical simulations) in Fig. 1, Eq. (7) exhibits a typical feature of quantum chaos, namely a dip-ramp-plateau structure. The early decay from unit value comes to an end, forming a dip (also known as a correlation hole) with the onset of a ramp. The latter extends until it saturates at a plateau value at the characteristic plateau time t_plateau = 2√d/σ [see Eq. (9)]. The existence of the ramp, a period of linear growth of ⟨G(t)⟩_GOE, is a consequence of the repulsion between long-range energy levels [11]. This long-range repulsion causes the energy levels to be anticorrelated. The plateau stems from the discrete energy spectrum, and its height is ⟨Z(2β)⟩_GOE / ⟨Z(β)⟩²_GOE. Similar chaotic features have been studied in the Gaussian Unitary Ensemble (GUE) [34,44,80], in Gaussian ensembles at infinite temperature [81], and in Sachdev-Ye-Kitaev (SYK) models [11,82,83].
For systems with less chaoticity than the full GOE, the span of the ramp will shrink or even disappear (see light blue curves in Fig. 1).
Without loss of generality, we then consider a bipartite system with local Hamiltonians H_1^(0) and H_2^(0) independently sampled from GOEs. As the total system is composed of two partitions, each described by a random-matrix Hamiltonian, it is not surprising that in the absence of interactions (or for weak coupling) the full system exhibits a visible dip-ramp-plateau structure when choosing X = H [see the dark blue curve in Fig. 2(a)]. However, when the coupling strength between subsystems 1 and 2 is enhanced, the dip-ramp-plateau structure is gradually washed out [light blue curves in Fig. 2(a)].
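A minimal numerical sketch of this experiment is given below; the Kronecker-product form assumed for the interaction term and all coupling values are illustrative assumptions, not the exact model behind Fig. 2. Increasing the coupling should progressively wash out the dip of ⟨G(t)⟩ for X = H.

```python
# Sketch: H = H1 (x) 1 + 1 (x) H2 + lam * H1' (x) H2', all factors GOE-sampled.
import numpy as np

rng = np.random.default_rng(3)

def goe(d):
    a = rng.normal(size=(d, d))
    return (a + a.T) / 2.0

def G_of_t(H, X, beta, ts):
    eH, U = np.linalg.eigh(H)
    rho = U @ np.diag(np.exp(-beta * (eH - eH.min()))) @ U.T
    rho /= np.trace(rho)
    eX, V = np.linalg.eigh(X)
    p = np.real(np.diag(V.T @ rho @ V))          # P(x) over eigenvalues of X
    return np.abs(np.exp(1j * np.outer(ts, eX)) @ p) ** 2

d, beta = 12, 0.1
ts = np.linspace(0.1, 200.0, 300)
for lam in (0.0, 0.5, 2.0):                      # coupling strengths: illustrative
    avg = np.zeros_like(ts)
    for _ in range(50):                          # crude disorder average
        H = (np.kron(goe(d), np.eye(d)) + np.kron(np.eye(d), goe(d))
             + lam * np.kron(goe(d), goe(d)))
        avg += G_of_t(H, H, beta, ts) / 50.0
    print(f"lam = {lam}: depth of dip, min <G(t)> = {avg.min():.3e}")
```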
To account for this phenomenon, we look at the characteristic function for different choices of the observable, X = H_1^(0) in Fig. 2(b) and X = H^(1) in Fig. 2(c), and show how these choices identify the contributions to quantum chaos from the first subsystem and from the interactions between subsystems 1 and 2. Clearly, H^(1) plays an important role in diminishing the chaotic behavior, since the corresponding characteristic function shows no obvious ramp structure. Indeed, from the perspective of the nearest-neighbor level distribution, the Kronecker product of random matrices tends to break the Wigner-Dyson distribution under certain conditions [84]. When the coupling is enhanced, the interaction term H^(1) dominates, and the chaoticity of the whole system is gradually suppressed.
Similar phenomena exist in more structured systems, as shown in Fig. 3 for a tripartite system with local Hamiltonians H_1, H_2, and H_3 (we omit the superscripts therein). Both the bipartite interactions (e.g., H_1 ⊗ H_2) and the tripartite interaction H_1 ⊗ H_2 ⊗ H_3 play an important role in decreasing the chaoticity of the composite global system, while the local chaotic nature of each subsystem can be detected by choosing a local observable (e.g., X = H_1) even at strong coupling.
It is worthwhile to note that the presence of interactions could also induce chaos if the interaction tends to mix the subsystems, as shown for coupled kicked rotors [46].
V. COUPLED SACHDEV-YE-KITAEV MODEL
The second example we consider is the coupled Sachdev-Ye-Kitaev (cSYK) model [55-64]. A system composed of 2N Majorana fermions is divided into two separate sides, and each subsystem is described by an SYK model [85,86]. We consider the left and right SYK Hamiltonians H_{L,R} with a bilinear coupling H_b. In the total Hamiltonian of the cSYK model, μ controls the strength of the bilinear coupling, and the χ_k denote Majorana fermion operators satisfying the anticommutation relations {χ_k, χ_l} = δ_{kl}. J_{klmn} and K_{kk′} are random coupling constants independently sampled from Gaussian distributions with zero mean and variances ⟨J²_{klmn}⟩ = 3!J²/N³ and ⟨K²_{kk′}⟩ = K²/N². A similar model has been used to study the holographic dual of an eternal traversable wormhole: by preparing the SYK model in a thermofield double state and turning on the coupling between the two sides, the wormhole becomes traversable in the gravitational picture [55]. Figure 4 shows the numerical result for the disorder-averaged ⟨G(t)⟩, in agreement with random matrix theory. From Fig. 4(b), we see that the bilinear coupling plays an important role in controlling the chaotic behavior of the whole system, as the corresponding generating function has no obvious dip-ramp-plateau structure. The chaotic character of the system is robust when the bilinear coupling is weak (μ ≲ 0.1). When the coupling is enhanced, the dip-ramp-plateau structure of the entire system is gradually washed out [Fig. 4(a)]. This implies that the entire system becomes less chaotic when the bilinear coupling is strong, even though the chaoticity of the subsystems remains unchanged. As in the GOE example, the chaotic nature of each subsystem can be detected by choosing a local observable (e.g., X = H_L), as shown in Fig. 4(b).
VI. DISCUSSION AND CONCLUSION
We have introduced a protocol to directly detect quantum chaos in interacting multipartite systems by measuring the statistical distribution of an energy observable X at thermal equilibrium. Specifically, we make use of the absolute square value G(t) of the generating function of the eigenvalue distribution associated with the observable X. When the observable equals the total Hamiltonian, G(t) reduces to the SFF. For local observables, chaotic features give rise to a dip-ramp-plateau structure in G(t) similar to that in the SFF. G(t) directly detects the contributions to quantum chaos in a composite system from the different subsystems by choosing the observable associated with k-partite interactions. We have shown that the coupling of chaotic systems can give rise to the suppression of quantum chaos in the composite system as the interaction strength among the subsystems is increased. From the perspective of decoherence, sampling the eigenvalue statistics of a subsystem is akin to sampling the local energy. Quantum chaos is generally expected to be suppressed as a result of decoherence [34,87]; see, however, [37]. In addition, even at strong coupling, the chaotic character of the subsystems can be unveiled by choosing X as a local observable, as demonstrated by considering the multipartite GOEs and the coupled SYK models.
Our scheme can be implemented in quantum devices, such as NMR systems [69,72,74] and trapped ions [75], by introducing an auxiliary qubit coupled to the system, as both random-spin and SYK models are realizable in the laboratory [88-94]. Our approach for diagnosing chaos in multipartite quantum systems may thus find broad applications in interdisciplinary studies in quantum information, quantum matter, and AdS/CFT duality, especially in analyzing quantum chaos in structured quantum many-body systems.
In this appendix, we briefly introduce the calculation of the generating function G(t) averaged over ensembles. In the annealing approximation, the averaged G(t) is given by [34,44]

⟨G(t)⟩ ≐ ⟨|Z(β + it)|²⟩ / ⟨Z(β)⟩². (A1)

This annealed average agrees with the quenched average ⟨|Z(β + it)|²/Z(β)²⟩ in the high-temperature region [11,34]. The denominator and numerator of Eq. (A1) can then be written as

⟨Z(β)⟩ = ∫ dE ⟨ρ(E)⟩ e^(−βE) (A2)

and

⟨|Z(β + it)|²⟩ = ∫ dE dE′ ⟨ρ(E, E′)⟩ e^(−(β+it)E) e^(−(β−it)E′), (A3)

where ρ(E) is the spectral density and ρ(E, E′) is the two-point probability density function.
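The following minimal sketch illustrates, for GOE spectra, the agreement between annealed and quenched averages at small β; the dimension, β, t, and the number of realizations are arbitrary illustrative choices.

```python
# Compare the annealed average <|Z(beta+it)|^2> / <Z(beta)>^2 with the
# quenched average <|Z(beta+it)|^2 / Z(beta)^2> over GOE realizations.
import numpy as np

rng = np.random.default_rng(4)
d, beta, t = 64, 0.05, 10.0

def Z(evals, z):                      # partition function at complex argument z
    return np.sum(np.exp(-z * evals))

num, den, quenched = [], [], []
for _ in range(500):
    a = rng.normal(size=(d, d))
    evals = np.linalg.eigvalsh((a + a.T) / 2.0)
    z2 = np.abs(Z(evals, beta + 1j * t)) ** 2
    num.append(z2)
    den.append(Z(evals, beta))
    quenched.append(z2 / Z(evals, beta) ** 2)
print("annealed:", np.mean(num) / np.mean(den) ** 2)
print("quenched:", np.mean(quenched))
```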
Gaussian orthogonal ensemble statistics
For the GOE, the spectral density ρ(E) and the two-point probability density function ρ(E, E′) are given in terms of the kernel K_d(x, y) by Eqs. (A4) and (A5), respectively [2,34]. Note that the GOE and the GUE share the same form of the N-point probability density function; the only difference is that the kernel K_d(x, y) in Eqs. (A4) and (A5) is quaternionic for the GOE. Note also that σ is selected as σ = 1/√2 in Ref.
According to Dyson's theorem [2], these reduce to the expressions given in Eqs. (A7) and (A8). The above method can be straightforwardly extended to higher-point correlation functions. Similar work has been done in Ref. [81] for evaluating spectral form factors at infinite temperature.
G(t) averaged over GOEs
With Eq. (A7), the partition function [Eq. (A2)] averaged over the GOEs is approximated by Eq. (A9), where I_n(·) is the modified Bessel function of the first kind and order n [Eq. (8) in the main text]. According to Eq. (A8), the imaginary-time partition function [Eq. (A3)] averaged over the GOEs is given by Eq. (A10). In the main text, we consider a fixed number of realizations and a fixed temperature for illustration. In this section, we append Fig. 5 and Fig. 6 to illustrate the dependence of ⟨G(t)⟩ on the number of realizations and on the temperature.
Figure 5. Same parameters as in Fig. 2(a). Data for ⟨G(t)⟩ are averaged over 1, 10, and 100 realizations of the GOE. | 4,011 | 2021-11-24T00:00:00.000 | ["Physics"] |
Accurate measurement of a fission chamber efficiency using the prompt fission neutron method
Fission Chambers (FC) are often used to determine fission cross sections and to measure the neutron beam flux via standard neutron-induced fission reactions. Thus, the fission detection efficiency is a key parameter. Several methods exist to determine this efficiency, with a final accuracy not better than 1%. The detection of prompt fission neutrons allows events related to the fission process to be tagged, and enables the efficiency to be inferred with an accuracy of the order of a few 0.1%. This method is very robust, since it is, to first order, independent of several factors such as the geometry, the materials used, or the neutron contour selection. To obtain high accuracy, a few corrections still have to be taken into account. In particular, the neutron detectors have to cover several detection angles. In addition, the background contribution of neutrons from cosmic rays or from an accelerator has to be removed. Several experiments based on the use of a 252Cf source are presented to illustrate all these points.
Introduction
Ionization chambers containing fissile material (called Fission Chambers (FC)) are very simple and versatile devices [1] used in several fields of nuclear physics and in applications in the nuclear industry and nuclear energy research [2][3][4]. In the field of nuclear data measurements, FCs are used to measure fission cross sections [5,6] and as neutron flux monitors [7,8]. They are also used to tag fission events in order to study fission-related phenomena [9,10], or to reject fission events in order to study otherwise hard-to-observe phenomena [11].
FCs generally have a very high efficiency, limited only by the self-absorption of the Fission Fragments (FF) in the sample when they are emitted at very large angles with respect to the target normal (∼90°). For many applications, the knowledge of the efficiency is a key parameter for obtaining accurate results.
Moreover, when one is interested in a weak phenomenon whose signature is hidden by the fission process (for instance radiative capture), the clean subtraction of fission events is crucial. In particular, the fission events undetected by the FC have to be estimated and also subtracted. As the statistics of the investigated process are low compared to those of the fission process (due to differences in cross sections or in secondary-particle emission multiplicities), the uncertainty on the result is proportional to the efficiency uncertainty, but the proportionality coefficient may be quite large. For the radiative capture cross section of a fissile isotope, this coefficient ranges from 5 to 30 in the thermal or epithermal energy range. In such cases, a very accurate knowledge of the FC efficiency is of paramount importance.
Usual methods
There are different ways to measure the FC efficiency, depending on the spontaneous-fission half-life of the sample, the knowledge of its fission cross section or the detector used.
The simplest case is when the nucleus of interest fissions spontaneously (like 252Cf). Then the fission rate depends only on the amount of material, which is easily obtained via alpha spectrometry. The main uncertainty sources of this method are the solid angle of the alpha spectrometer and the dead-time correction of the fission measurement. For certain nuclei, the spontaneous fission yield (y_SF) may also be a significant source of uncertainty (for instance, y_SF of 240Pu has an uncertainty of 3.5% [12]). For 252Cf, an uncertainty of 1% on the fission efficiency can be achieved.
If the nucleus does not fission spontaneously, another solution is to use a Frisch-gridded FC. This device allows one to estimate the emission angle of the FF. If the fission-fragment angular distribution is known, the FFs missed at high angles can be inferred [13]. The uncertainty of this method can be as low as 1% [14].
In the cases where no Frisch grid is used, it is much more complex to measure the fission efficiency. It can be calibrated against a reference Fission Chamber (e.g., 235U(n,f)) or Ionisation Chamber (e.g., 10B(n,α) or 6Li(n,t), etc.). It then depends on many parameters (α-spectrometry efficiency, decay constant, fission cross section, etc.) for both the studied and the reference material, as well as on the dead-time correction and the efficiency of the reference fission or ionization chamber. In the end, the accuracy of the FC efficiency obtained with this method is not better than a few percent.
The fission detection efficiency can also be estimated by analytic calculations taking into account the thickness of the deposit and the FF anisotropy [15]. The uncertainty of this method is again of the order of 1% [16].
Simulations can also be performed with Monte Carlo codes. At present, however, such simulations are not of sufficient quality to give an accurate value of the fission efficiency.
Prompt fission neutron method
A much simpler and more accurate method relies on using the prompt neutrons emitted by the FFs [17]. A very convenient way of detecting fission neutrons is with scintillators. In particular, C6D6 detectors allow neutrons and γ-rays to be disentangled via Pulse-Shape Discrimination (PSD). These discriminated neutrons can then be detected in coincidence with a FF, and the FC efficiency ε_FC is then given by the simple equation:

ε_FC = N_coinc-n&FF / N_single-n, (1)

where N_coinc-n&FF is the number of neutrons detected in coincidence with a FF, and N_single-n is the total number of detected neutrons. Equation (1) remains valid no matter how many neutrons were emitted by the fission process, or how many neutrons were detected at the same time by other scintillators. In addition, as the FC efficiency is close to 100%, the two terms of Eq. (1) are strongly correlated, greatly reducing the statistical error.
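As an illustration of the error reduction implied by this correlation, the following minimal sketch treats the coincidence count as binomial in the number of detected neutrons; the counts are invented numbers chosen to resemble the efficiencies discussed below, not measured data.

```python
# Eq. (1) with its statistical uncertainty: each detected neutron either is or
# is not accompanied by a detected FF, so the coincidence count is binomial in
# N_single_n and sigma_eps = sqrt(eps * (1 - eps) / N). Illustrative counts.
import math

N_single_n = 2_000_000          # all detected prompt neutrons
N_coinc_n_FF = 1_972_000        # neutrons in coincidence with a fission fragment

eps = N_coinc_n_FF / N_single_n
sigma = math.sqrt(eps * (1.0 - eps) / N_single_n)
print(f"eps_FC = {eps:.4f} +/- {sigma:.4f}")   # ~0.9860 +/- 0.0001
```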
Robustness of the method
Due to the ratio expressed in Eq. (1), this method is very simple (no dead-time correction is needed, since dead time affects both terms in the same way), very stable in time, and very robust against many experimental parameters, such as the neutron contour used, the exact location of the detectors, or the materials used. In order to investigate this method, we carried out measurements with a 252Cf source placed in a parallel-plate fission chamber and surrounded by several C6D6 detectors located at 90° with respect to the target normal. Figure 1 presents the FC efficiency measured using two C6D6 detectors, which remained in the same configuration throughout the test campaign. As can be seen, the measured efficiency is very stable over a period of more than 10 days: the results obtained with each detector are constant within 0.05%, and the discrepancy between the two sets of data is of the order of 0.05%.
In addition, a configuration was tested with a 5 cm-thick lead brick placed between the FC and a C6D6 detector. Once corrected for the background (see Sect. 4), the measured efficiency is in agreement with the other measurements, albeit with a high statistical uncertainty. This proves that materials located between the FC and the neutron detectors, even thick, high-Z materials, do not introduce any bias into the method.
The method relies on the selection of neutrons in the PSD spectrum. Nevertheless, the chosen neutron contour has almost no impact on the measured efficiency, as can be seen in Figs. 2 and 3. Figure 2 presents the different neutron contours used, including two small contours selecting only very small subsets of the detected neutrons. Figure 3 shows the measured efficiency for each of these contours. The results obtained are consistent whatever the contour used. Even the small contours give a correct efficiency, in spite of an increased statistical uncertainty. The efficiency can thus be measured using any fraction of the detected neutrons. Consequently, the prompt fission neutron method can be used even if the neutron-gamma discrimination is not very good, as long as the neutron contour contains only neutrons (see Sect. 4).
Influence of the neutron detector position
To first order, the method does not require knowledge of the angle, distance, or dimensions of the neutron detectors. These assumptions no longer hold at second order, although this is mostly ignored when this method is used [17]. In particular, a bias can be observed as a function of the neutron detector angle, as shown in Fig. 4. For this experiment, several measurements were carried out changing the angle of two C6D6 detectors. This means that the measured FC efficiency depends on the position of the neutron detectors. This evolution can be understood by looking at the FC pulse-height spectra obtained in coincidence with the prompt neutrons (Fig. 5).
Figure 5. FC pulse-height spectra measured without coincidence (FC only, thick line), or in coincidence with neutrons detected in the C6D6 detectors located at 0°, 45°, and 90° (thin lines). The alpha peak is present at low energy in the "FC only" spectrum.
The spectrum "FC only" has the standard shape of narrow parallel FCs: the peak due to alpha particles is visible at low energy (below channel 200), whereas the two FF peaks are barely visible around channel 1000 because of the very poor energy resolution.A FF emitted at 0 • with respect to the target normal loses only a limited amount of energy in the gas before hitting the opposite electrode (step increase of the spectrum near channel 300).As the FF emission angle increases, so does the energy loss in the gas (broad peak until channel 1500).For a FF emitted at grazing angle, a variable part of its energy may be lost in the sample itself, and the energy deposited in the gas can be as low as 0. Thus, the threshold used to cut the alpha contribution also removes some FFs emitted at grazing angles.
Figure 5 also shows that the FF spectrum obtained in coincidence with a neutron detector depends on the angle of this detector: the spectrum is more peaked (more FFs emitted at 0°) when in coincidence with detectors at 0°, and flatter (more FFs emitted at high angle) when in coincidence with detectors at 90°. This is due to a kinematic effect of the FF velocity on the neutron momentum. This effect has also been confirmed by dedicated simulations taking into account the emission of neutrons by the moving FF. As can be seen in the spectra, the FF fraction below the threshold is lower when in coincidence with a detector at 0°.
Because of this bias, none of the points in Fig. 4 represents the real efficiency of the chamber. The real efficiency can be inferred by integrating the measured efficiency over all the cosine bins, as sketched below. The aim of this procedure is to mimic the result of a 4π detector surrounding the FC, not to take into account the fission anisotropy. Indeed, the results already include this effect, and no additional procedure is needed for that.
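A minimal sketch of this integration step is given below; the per-bin efficiencies and uncertainties are illustrative placeholders, not the measured values.

```python
# Average the measured efficiencies over equal-width cos(theta) bins with
# uniform weights to mimic a 4*pi neutron detector. Illustrative values only.
import numpy as np

eps_meas = np.array([0.990, 0.989, 0.989, 0.988, 0.987,
                     0.986, 0.985, 0.984, 0.983, 0.982])  # one value per cos bin
sig_meas = np.full_like(eps_meas, 0.001)

eps_real = eps_meas.mean()                          # equal-width bins -> plain mean
sig_real = np.sqrt(np.sum(sig_meas**2)) / eps_meas.size
print(f"real efficiency = {eps_real:.4f} +/- {sig_real:.4f}")
```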
We obtain here a real efficiency of (98.6 ± 0.1)%. This result confirms the excellent performance of the prompt fission neutron method for inferring a FC efficiency, at least for a thin 252Cf source.
The good accuracy of this result is partly due to the fact that the efficiency spread between the measurements at 0° and 90° is lower than 1%. This is no longer the case if the FC threshold is high (for instance at channel 500): efficiency spreads of up to 30% between measurements at 0° and 90° have then been observed. Under such conditions, the average value can no longer be accurate. This implies that the prompt fission neutron method cannot be used if the FC threshold is high, i.e., for highly radioactive samples where the α peak overlaps significantly with the FF peak.
Influence of parasitic neutrons
This method is also quite sensitive to parasitic events in the neutron contour. These parasitic events may be cosmic neutrons, fast neutrons coming from the beam, or wrongly discriminated γ-rays. As such events are not related to a fission process in the FC, they reduce the measured efficiency according to Eq. (1). An experiment was carried out by placing the neutron detectors at different distances from the FC. The prompt fission neutron rate is greatly reduced with distance, contrary to that of the background neutrons. As can be seen in Fig. 6, the measured efficiency drops strongly at large distances, both for the measurement at 0° and for that at 90°. A correction can be applied to mitigate this issue: by requiring a γ-ray detection in coincidence with the prompt fission neutron, one greatly suppresses the parasitic events. Equation (1) then becomes the same ratio restricted to neutrons detected in coincidence with a fission γ-ray (see the sketch below). As can be seen in Fig. 6, the "corrected" efficiency becomes essentially independent of the neutron detector distance. Nevertheless, this solution cannot overcome too high a level of parasitic events. It implies that this method cannot be applied if: the signal-to-background ratio is too low; fast neutrons (produced by an accelerator) can be directly detected by the scintillator; or a very intense γ-ray flux completely prevents the discrimination between neutrons and γ-rays.
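A minimal numerical sketch of this corrected estimator, with invented counts, is given below; the restriction of both numerator and denominator to γ-tagged neutrons is the point being illustrated.

```python
# Background-corrected estimator: restrict both terms of Eq. (1) to neutrons
# detected in coincidence with a fission gamma-ray, which removes parasitic
# neutrons (cosmic, beam) unrelated to a fission in the chamber. Counts are
# illustrative placeholders.
N_n_gamma = 150_000          # neutrons in coincidence with a gamma-ray
N_n_gamma_FF = 147_900       # ... and additionally with a fission fragment

eps_corr = N_n_gamma_FF / N_n_gamma
print(f"corrected eps_FC = {eps_corr:.4f}")
```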
Conclusions
The detection of prompt fission neutrons in coincidence (or not) with a Fission Fragment in the Fission Chamber enables an accurate value of the fission detection efficiency to be obtained. It has been shown that this method is very robust, since it is, to first order, independent of several factors such as the geometry, the materials used, or the neutron contour.
Nevertheless, a small angular dependence has to be taken into account by detecting the prompt neutrons at different angles with respect to the target normal. If the neutron background can be neglected, an efficiency accuracy of the order of 0.1% can be obtained for a thin source like 252Cf. If the neutron background is not negligible, an additional coincidence with a fission γ-ray may be implemented, at the price of lower statistics.
Figure 1. Time evolution of the measured efficiency of a 252Cf fission chamber obtained with prompt neutrons detected in two C6D6 detectors. The efficiency measured in the configuration with a lead brick is indicated by the full black circle at t ∼ 10 days. The scale is chosen for comparison with Figs. 3 and 4.
Figure 2. Pulse-Shape Discrimination spectrum of neutrons and γ-rays detected in the C6D6 detectors. Different neutron contours are shown: (#1) a very large one, (#2) same as #1 without the lower-left corner, (#3) same as #2 without the low-energy part, (#4) same as #3 without the lower half of the neutron banana, (#5 and #6) two very small contours.
Figure 3. Measured efficiency for the neutron contours indicated in Fig. 2. The scale is chosen for comparison with Figs. 1 and 4.
Figure 4. Measured efficiency as a function of the angle of the C6D6 detectors.
Figure 6. Measured efficiency with C6D6 #1 placed at different distances, either at 0° or at 90°. Open symbols are for corrected data. Lines are to guide the eye.
This work is supported by the European Commission within the 7th Framework Programme through CHANDA (Project No. 605203). | 3,441.2 | 2017-09-01T00:00:00.000 | ["Physics"] |
Exchange energies with forces in density-functional theory
We propose exchanging the energy functionals in ground-state DFT with physically equivalent exact force expressions as a promising new route towards approximations to the exchange-correlation potential and energy. In analogy to the usual energy-based procedure, we split the force difference between the interacting and the auxiliary Kohn-Sham system into a Hartree, an exchange, and a correlation force. The corresponding scalar potential is obtained by solving a Poisson equation, while an additional transverse part of the force yields a vector potential. These vector potentials obey an exact constraint between the exchange and correlation contributions and can further be related to the atomic shell structure. Numerically, the force-based local-exchange potential and the corresponding exchange energy compare well with the numerically more involved optimized effective-potential method. Overall, the force-based method has several benefits compared to the usual energy-based approach and opens a route towards numerically inexpensive non-local and (in the time-dependent case) non-adiabatic approximations.
I. INTRODUCTION
It is with great pleasure that we provide this contribution to the special issue of the Journal of Chemical Physics honoring John Perdew and his work in quantum chemistry. John Perdew has made groundbreaking advancements in developing exchange-correlation functionals within density-functional theory (DFT), which are crucial for the accurate description of the interactions between electrons. DFT 1, with its many variants [2][3][4][5][6][7][8][9], is nowadays the workhorse of first-principles simulations in quantum chemistry, solid-state physics, and materials science, and John Perdew has greatly contributed to this success story. His work has focused on improving the accuracy and efficiency of density-functional calculations by proposing more precise and robust functionals, allowing researchers to study a wide range of chemical and physical properties of materials. Specifically, he has explored fundamental (exact) constraints that an exchange-correlation functional must satisfy to accurately describe the electronic interactions in a system. Enforcing such exact constraints can greatly improve the reliability of approximate functionals 10. This work follows this general idea by taking up previous suggestions on how to rephrase the exchange-correlation potential in terms of forces. We give known and novel exact force-based constraints and show how to translate these ideas into an efficient numerical scheme.
Most DFT simulations are performed using the Kohn-Sham (KS) scheme 11, where the density of the interacting system is predicted by solving an auxiliary non-interacting system. It is precisely the mentioned exchange-correlation potential that relates the interacting and the non-interacting system through the underlying density-potential mapping v(r) ↔ ρ(r). For a recent review on this mapping in the context of DFT, we point to Penz et al. 12. It is common practice to derive approximations for the (in general unknown) exchange-correlation potential by re-expressing the universal density functional as a sum of non-interacting kinetic, Hartree, and exchange-energy functionals, as well as the unknown correlation-energy functional 13, and then to assume functional differentiability 14 with respect to the density. While for approximate functionals that are given explicitly in terms of the density, potentials can be determined this way by direct differentiation, for implicit functionals this is no longer possible in general 15. To make matters worse, it has been shown that the universal density functional of DFT is not functionally differentiable with respect to the usual function spaces 16. While a generalized definition of functional differentiability (subdifferentiability) is enough to establish the mapping from v-representable densities to potentials 17, many of the commonly employed rules of differential calculus, such as linearity or the chain rule, might no longer hold in the same way 18. This fact therefore questions this common way of inferring exchange-correlation potentials from exchange-correlation energy functionals. We note that the theoretical setting of an exact regularization procedure is available that renders the involved functionals differentiable [19][20][21] and that this surprisingly links to the Zhao-Morrison-Parr method for mapping ρ(r) → v(r) 22. Importantly, in this work we highlight that also the exchange-only energy is non-differentiable with respect to densities, thus allowing local-exchange potentials only in the form of generalized constructions such as the optimized effective potential (OEP), or, alternatively, leading to an additional vector potential for exchange effects. This vector potential naturally appears in a force-based approach and acts semi-locally on the wave function. This is in contrast to the exchange term in Hartree-Fock, which acts fully non-locally on the wave functions.
From a physical point of view, one can always exchange the description of a many-particle quantum system in its ground state in terms of energies for a description based on forces. Both views have been viable routes towards obtaining the desired potential. Indeed, the exact exchange-correlation potential of DFT can be expressed directly in terms of the difference in force densities between the interacting and the auxiliary non-interacting system [23][24][25][26], thus bypassing functional differentiation and related issues. A method for deriving DFT potentials from the electric field due to the Fermi-Coulomb-hole charge distribution was pioneered by Harbola and coworkers [27][28][29][30]. However, this approach misses the kinetic-correlation contribution and thus does not retrieve the full exchange-correlation potential 31. This issue was also noted by Holas and March 23, who first used a force-based approach to give an expression for the exchange-correlation potential of DFT in the form of a (path-independent) line integral. Building upon this important work, Sahni 32 was able to extend the method of Harbola. In this work, we show that a force-based approach is not only conceptually very appealing but also practically relevant. In doing so, we stick to a fully spin-resolved, collinear formulation. Specifically, we show that besides the usual Hartree potential, we can also derive the simple explicit form of the local-exchange potential previously suggested by Harbola and Sahni 27. We show this potential to be directly linked to the exchange force density, and it enters a generalized exchange virial relation. A different form of a generalized exchange virial relation is actually discussed in another paper of this special issue. 33 We further find a relation between the exchange and the correlation force densities that takes the form of a novel exact constraint. As we demonstrate, the formulation of the force-based local-exchange potential is consistent with current-density-functional theory (CDFT) 6,34, and we discuss its connection to the time-dependent case. In the context of ground-state DFT we then show that the explicit force-based local-exchange potential performs similarly to the numerically much more involved optimized effective potential (OEP) approach in exchange approximation. We show that the difference between the OEP and the force-based local-exchange potential can be connected to the above-mentioned exact constraint that exchange and correlation force densities need to fulfill. We finally comment on practical ways to treat the remaining correlation force densities. In doing so, we highlight how the force-based approach provides a route towards numerically inexpensive non-local (in how it depends on the density) and non-adiabatic functionals that also act semi-locally on the wave function if they contain a vector-potential contribution.
II. FORCE-BASED KOHN-SHAM SETTING
To start with, we consider the N-particle Hamiltonian (in Hartree atomic units, e = ℏ = m_e = (4πϵ_0)^(−1) = 1)

Ĥ(t) = Σ_{k=1}^{N} [−½∇²_k + v(r_k σ_k, t)] + Σ_{k<l} |r_k − r_l|^(−1), (1)

first in a time-dependent setting, while we later switch to ground states.
Here, v(rσ, t) is the external, spin-resolved one-particle potential at time t. Note that while the external potential can act separately on the spin components, we do not take an external magnetic field or spin-orbit coupling into account here. We comment at the end on how to extend the present force-based formalism to these cases as well. For anti-symmetric wave functions Ψ(x_1, ..., x_N, t), where x_k = (r_k σ_k), we define the spin-resolved p-th-order reduced density matrix ρ^(p) [Eq. (2)]. We can then use these reduced density matrices and the spin-resolved density ρ(x, t) = ρ(rσ, t) = ρ^(1)(rσ, rσ, t) to express the (paramagnetic and spin-resolved) current density j(rσ, t) = Im ∇ρ^(1)(rσ, r′σ, t)|_(r′=r) and its equation of motion 26,35 (also called the "local force-balance equation"),

∂_t j(rσ, t) = −ρ(rσ, t)∇v(rσ, t) + F_W[Ψ](rσ, t) + F_T[Ψ](rσ, t). (4)

This expression introduces the exact interaction-stress and momentum-stress force densities, F_W and F_T, respectively. Here, (∇|r′ − r|^(−1)) indicates that the gradient acts only on the Coulomb interaction term. These force terms can be linked directly to the quantum stress tensor 36, which includes information about the atomic shell structure 37. Equation (4) has been the primary starting point for inquiries in time-dependent DFT (TDDFT). Among other things, it was used to provide a mapping from densities to potentials 38, to analyze features of the time-dependent exchange-correlation potential 39, to obtain exact constraints as well as formulations for non-adiabatic approximate functionals 40,41, and to reformulate KS-TDDFT in terms of the second time derivative of the density 42. While here we focus on the ground-state problem, some consequences for the time-dependent case will be discussed further in Sec. V.
In the following, we indicate the force terms evaluated with the solution Ψ of the fully interacting problem as F_W[Ψ] and F_T[Ψ]. The auxiliary, non-interacting KS problem is governed by the Hamiltonian Ĥ_s = T̂ + V̂[v_s], with a different external potential v_s(rσ, t), and has a Slater-determinant solution Φ. Analogously to Eq. (4), we then have for the auxiliary system

∂_t j_s(rσ, t) = −ρ_s(rσ, t)∇v_s(rσ, t) + F_T[Φ](rσ, t), (7)

with a different current density j_s.
We now assume that all potentials are time-independent, that both systems are in their ground states, and further that both generate the same ground-state density, i.e., ρ(rσ) = ρ_s(rσ). In the ground state it also holds that ∂_t j(rσ) = ∂_t j_s(rσ) = 0, and with the definition of the Hartree exchange-correlation (Hxc) potential v_Hxc(rσ) as the difference between the auxiliary and the external potential, we find the static force-balance relation [Eq. (8)], which defines F_Hxc for each spin channel. By virtue of the Hohenberg-Kohn theorem 12,43, and assuming non-degeneracy of the ground states for simplicity, the Slater determinant Φ as well as the interacting wave function Ψ are given solely and uniquely in terms of the density, which makes all the force densities determined by the density only. Equation (8) implies that

f_Hxc(rσ) = F_Hxc[Φ, Ψ](rσ)/ρ(rσ) (9)

is a purely longitudinal (conservative) vector field. Since the Hartree contribution is longitudinal as well, so is the remaining exchange-correlation part. But if we decide to split the exchange-correlation part into its exchange and correlation contributions, as is typically done for the energy, we no longer have any such knowledge about these individual contributions. So the exchange and correlation vector fields can and will contain a non-zero transverse component. Now, we can recast Eq. (9) into a Poisson equation, ∇²v_Hxc = −∇ · f_Hxc, by applying the divergence, and solve for v_Hxc using the corresponding Green's function for the spatial domain R³,

v_Hxc(rσ) = (1/4π) ∫ dr′ [∇′ · f_Hxc(r′σ)] / |r − r′|. (10)

Equation (10) represents the direct link between the Hxc force density and the corresponding potential. Unlike the link between the energy and the potential, no functional differentiability is involved here. Next, we split up the Hxc force density in analogy to the usual partition of the energy in DFT,

F_Hxc[Φ, Ψ] = F_W[Φ] + F_c[Φ, Ψ], (11)

where F_W[Φ] is the Hartree-exchange (Hx) force density and F_c[Φ, Ψ] the correlation force density. If desirable, the correlation part can be split again into a kinetic-correlation contribution, F_T[Ψ] − F_T[Φ], and an interaction-correlation contribution, F_W[Ψ] − F_W[Φ]. The partition of Eq. (11) leads to the respective force-based potentials, v_fHx and v_fc, each obtained from its force density via the Poisson construction of Eq. (10), that add up to the exact Hxc potential [Eqs. (12) and (13)]. Since the Hx force density is given in terms of the KS wave function only, we know this part explicitly and can in principle calculate the exact force-based Hx potential for a given KS wave function.
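To illustrate the construction, the following minimal sketch evaluates this potential-from-force link in the spherically symmetric case, where the Poisson solution of Eq. (10) with v → 0 at infinity reduces to a radial line integral; the force profile is a toy model, not the force density of any real system.

```python
# For a purely radial, longitudinal force-per-particle field f(r) r_hat, the
# potential v(r) = int_r^inf f(r') dr' satisfies grad v = -f and hence
# nabla^2 v = -div f, i.e., the Poisson equation quoted in the text.
import numpy as np

r = np.linspace(1e-3, 20.0, 4000)
f_r = -1.0 / (r + 1.0) ** 2                  # toy radial force profile

# Cumulative trapezoidal integral from r0 up to each grid point ...
cum = np.concatenate(([0.0], np.cumsum(0.5 * (f_r[1:] + f_r[:-1]) * np.diff(r))))
v = cum[-1] - cum                            # ... turned into int_r^rmax f(r') dr'
print("v at origin:", v[0], "  v at box edge:", v[-1])
```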
To make the resulting force-based Hx potential more explicit, we make use of the fact that Φ is a single, closed-shell Slater determinant with spin-space orbitals φ_k(rσ). We can then express the pair density ρ_s^(2)(rσ, r′σ′) in terms of the one-body reduced density matrix ρ_s^(1)(rσ, r′σ) of the KS system 44. The Hx force density therefore splits naturally into a Hartree and an exchange term [Eq. (14)]. Note that while the Hartree mean field acts on both spin channels, the exchange force density only links to the same spin component. If Φ were the Slater determinant from a non-local Hartree-Fock calculation, then these terms would be the corresponding Hartree and Fock exchange force densities, respectively. From the Hartree force density F_H(rσ) = −ρ(rσ)∇v_H(r) we read off the (spin-summed) Hartree potential

v_H(r) = Σ_σ′ ∫ dr′ ρ(r′σ′) / |r − r′|.

The potential from the exchange terms will be derived in Sec. IV. The exchange force density satisfies an exchange virial relation that gives the exchange energy [Eq. (16); see App. B for details]. The exchange force density can be interpreted as the force on a test particle in the electric field of the exchange hole, as detailed in Harbola and Sahni 27. This relation provides an important link from forces, or approximations to them, back to the respective energies.
III. FORCE-BASED EXACT CONSTRAINTS
Let us now comment on some exact constraints for the force densities. If, for the sake of consistency, just a single particle with wave function φ(rσ) is considered, then it directly follows that F_W[φ] = 0 (no self-force) and, naturally, F_c[φ, φ] = 0. Note that F_W[φ] = 0 can also be deduced from Eq. (14). Moreover, in the one-particle case, because Ψ = Φ = φ, the kinetic-correlation and the interaction-correlation force densities must vanish independently. We further remark that these self-interaction properties are directly related to the corresponding expressions for the energy, which serve as a basis for the construction of self-interaction corrections, as pioneered by Perdew and Zunger 45. A similar scheme could thus be developed on the basis of forces.
The zero-force and zero-torque constraints 46 in the force-based formulation for the ground state take the simple form

Σ_σ ∫ F_Hxc(rσ) dr = 0,  Σ_σ ∫ r × F_Hxc(rσ) dr = 0. (17)

They even hold for each spin channel independently for a Hamiltonian like Eq. (1) that does not feature any non-collinear magnetism. Since we have an explicit expression for the contribution F_W[Φ] to the full F_Hxc[Φ, Ψ] available, we can tighten these constraints further. The integrand in Eq. (17) that follows from Eq. (14) is antisymmetric under the exchange rσ ↔ r′σ′, while the integral is invariant under this relabeling, so the contribution of F_W[Φ] vanishes. This means that Eq. (17) holds for F_W[Φ] independently, and thus we also obtain exact constraints for the correlation force alone [Eq. (18)]. This property can even be formulated independently for the kinetic-correlation and the interaction-correlation force densities, as shown in Fuks et al. 40.
A further exact constraint that holds locally for the exchange and correlation vector fields is derived at the end of Section IV.
IV. DISCUSSION OF THE FORCE-BASED LOCAL-EXCHANGE POTENTIAL
Let us now consider how to make the above relations between force densities practical for DFT applications. Using Eqs. (12) and (14), we can define the force-based local-exchange potential v_fx [Eq. (19)], which together with the Hartree term gives the full Hx potential. Here, we used the usual definition of the exchange-hole density 44 (without a factor 1/2, since it is spin-resolved),

ρ_x(rσ, r′σ) = −|ρ_s^(1)(rσ, r′σ)|² / ρ(rσ). (20)

The potential v_fx is therefore the exchange potential that originates from only the longitudinal part of the exchange vector field f_x = F_x[Φ]/ρ. We will come back to this point and its implications for density-functional approximations below. To complete the picture, the missing correlation potential is given uniquely in terms of the (unknown) force-density difference F_c[Φ, Ψ] from Eq. (11) and a simple Coulomb integral [see Eq. (12)]. For the force-based local-exchange potential given by Eq. (19), a numerically more convenient form in terms of the Slater-exchange potential plus correction terms can be derived (see App. A). It also obeys the usual coordinate-scaling relations (see App. B).
Based on the above explicit form of the local-exchange potential, we can highlight differences to the usual energy-based approach and point out potential advantages of the force-based approach. In the energy-based approach, the potential is found via a functional variation of the energy expression with respect to the density. In the exchange case, one considers the functional derivative of the Hartree-exchange energy, where (1/2) Σ_σ ∫ v_H(r)ρ(rσ) dr is the Hartree energy. Now, E_x[ρ] is defined as a density functional by invoking the usual mapping ρ → Φ. As was pointed out by van Leeuwen 15, for an implicit density functional the (generalization of the) functional derivative is not straightforward and does not exist in general. On the other hand, if the functional derivative did exist, then by construction it would obey a virial relation of the form (see App. B)

E_x[ρ] = −Σ_σ ∫ ρ(rσ) r · ∇v_x(rσ) dr. (22)

In practice, the derivative is determined by the OEP approach 47,48, which needs to assume Fréchet (total functional) differentiability to allow for the application of the functional chain rule 49,50. Yet, OEP exchange potentials, in accordance with the non-differentiability of the exchange-energy functional, in general do not obey Eq. (22). Sometimes this relation is additionally imposed, e.g., in Fritsche and Yuan 51 (also compare Tab. I); however, this will not restore differentiability. Consequently, the OEP procedure needs to be interpreted as a local-potential approximation, but not as an approximation to an actually existing local-exchange potential defined by a functional derivative. In the force-based approach, which avoids any reference to functional differentiability, a different virial relation is derived. We start by applying the Helmholtz decomposition 52 to the exchange vector field. This yields a longitudinal (curl-free) and a transverse (divergence-free) vector-field component.
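As an illustration of this decomposition step, the sketch below splits a periodic toy vector field into longitudinal and transverse parts with FFTs, projecting onto k kᵀ/|k|² in Fourier space; the grid, box size, and field are arbitrary choices and not related to any actual exchange force density.

```python
# FFT-based Helmholtz decomposition of a periodic 3D vector field.
import numpy as np

n, L = 32, 10.0
k1 = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                                   # avoid division by zero at k = 0

rng = np.random.default_rng(5)
f = rng.normal(size=(3, n, n, n))                   # toy vector field
fk = np.fft.fftn(f, axes=(1, 2, 3))

kdotf = kx * fk[0] + ky * fk[1] + kz * fk[2]
f_long_k = np.stack([kx, ky, kz]) * kdotf / k2      # longitudinal (curl-free) projection
f_long = np.real(np.fft.ifftn(f_long_k, axes=(1, 2, 3)))
f_trans = f - f_long                                # transverse (divergence-free) remainder
```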
With this we find the generalized exchange virial relation [Eq. (24)] (see App. B for details; Harbola, Slamet, and Sahni 28 give the same relation, just without the spin sum). The last term, due to the curl, vanishes for spherically symmetric densities (see App. B, where we also give an explicit formula for α_fx, and Tab. I). Hence, v_fx satisfies a virial relation of the form of Eq. (22) for closed-shell, spherically symmetric systems, but in general we have the more involved Eq. (24), including a transverse component through the curl term. This is due to the fact that the exchange vector field f_x is not purely longitudinal, and hence, while the exchange energy is directly linked to the exchange force density, the longitudinal part of the exchange vector field alone cannot yield the full exchange energy in general. Since OEP methods do not fulfill the virial relation of the form of Eq. (22) even in the spherically symmetric case (see Tab. I), this implies that the local-exchange potential from the force-balance approach is in general different from an exchange potential defined as a (generalized) exchange-energy derivative 15, like those obtained by common OEP procedures. This was already pointed out by Wang et al. 53 when discussing the local-exchange potential of Harbola and Sahni 27, which is equivalent to Eq. (19). To show this, they derived the second-order gradient expansion of both the gradient of the energy-based exchange potential and f_x = F_x[Φ]/ρ from Eq. (14), and showed that the expressions do not match. Note, however, that in order to make this a strict statement about the potentials, we need to assume that f_x is a gradient field, i.e., α_fx = 0, which does not hold in general.
On the practical side, if one is only interested in finding a local potential that minimizes the exchange energy, then the common OEP approaches will usually perform better than the force-based local-exchange potential (see Tab. II in App. C for a comparison to Hartree-Fock results). This is by design, since the exchange-only OEP procedure is precisely constructed to seek the local potential v_OEPx that minimizes the energy with an uncorrelated state 54. Herein, the state is always a Slater determinant constructed from the orbitals of a one-particle Hamiltonian with the chosen potential v_OEPx. Note, however, that due to the restriction of the OEP to local potentials, the obtained energies will be higher than the Hartree-Fock results, which allow for non-local potentials. Furthermore, as is clear from the previous discussion, the common OEP procedures applied to the exchange energy do not give the correct exchange force density. Instead, they generate a purely longitudinal vector field that is not related to the exchange force density in a direct manner. We thus usually lose control over the connection between the energy terms and the corresponding force densities (which leads, among other things, to a violation of the virial relation). An important exception is the exchange-only local-density approximation, where the connection still holds, as can be shown directly. The same holds for correlation approximations: any approximate correlation energy can only ever lead to a longitudinal vector field via the corresponding (generalized) energy derivative, while an approximation based on forces will usually include a transverse component. This means that there is no strict connection between energy-based and force-based approximations. To put it differently, if we want to build approximations in DFT based on Hartree and exchange terms beyond the local-density approximation, we have to decide whether we use the exchange energy or the exchange force density. Both strategies only agree when we use the exact exchange and correlation terms together.
In the force-based approach we find an additional exact constraint that holds locally for the transverse component of the exchange vector field and relates it directly to correlation effects. This follows from the previous observation that, by Eq. (9), f_Hxc is purely longitudinal; since the same holds by construction for the Hartree part, the transverse contribution of f_x + f_c must vanish. If we now define α_fc analogously to α_fx, then this means that at each point in space and for every spin component the transverse components must cancel,

∇ × f_x(rσ) + ∇ × f_c(rσ) = 0. (25)

Having such an exact constraint that gives direct access to some local correlation effects can be seen as an advantage of the force-based approach over the usual energy-based, global viewpoint.
Finally, let us comment on the homogeneous-density limit. In Tchenkoue et al. 26 it was demonstrated how the usual Slater Xα 55 and local-density approximation (LDA) 44 formulas for the local-exchange potential can be derived directly from the exchange-force expression, Eq. (14). Since f_x(rσ) is purely longitudinal for a homogeneous density, the exact same derivation can also be started directly from the local-exchange potential expression of Eq. (19). A related derivation of the same fact, based on the second-order gradient expansion of the exchange-hole density, was already given in Wang et al. 53. This directly connects the most fundamental functional approximations of DFT with the present formalism.
V. THE FORCE-BASED APPROACH IN OTHER DFT VARIANTS
Another advantage of the force-based approach is its inherent compatibility with CDFT and time-dependent DFT (TDDFT). The generalized exchange virial relation, Eq. (24), highlights the connection of the force-based approach to CDFT. If, besides the density ρ, we also intend to control the current density j, then we need a transverse exchange-correlation vector potential as well, to which α_fx contributes as the exchange part 26. We even find that v_fx and α_fx can be chosen as the local-exchange potential of CDFT and of time-dependent CDFT 26. This makes v_fx nicely compatible with this variant of DFT.
To make the connection to TDDFT visible, we derive the analogue of Eq. (8) by subtracting Eqs. (4) and (7), only this time the time-derivative of the currents is not zero.
∂ t [j(rσ, t) − j s (rσ, t)] = −ρ(rσ, t)∇v Hxc (rσ, t) − F Hxc (rσ, t). (26)

In order to still get rid of the currents, one can apply the divergence and use the continuity equation ∂ t ρ(rσ, t) = −∇ • j(rσ, t) = −∇ • j s (rσ, t) for both systems that share the density ρ(rσ, t) at all considered times. Applying the divergence to Eq. (26), the current terms drop out and we arrive at

∇ • [ρ(rσ, t)∇v Hxc (rσ, t)] = −∇ • F Hxc (rσ, t). (27)

Consequently, the local-exchange potential in TDDFT is now determined from the exchange force density not by solving a Poisson equation but by inverting a Sturm-Liouville equation. Therefore, the local-exchange potential in TDDFT will be different from v fx, yet the difference can be determined from α fx 40. On the other hand, if f Hxc (rσ, t) were purely longitudinal, then Eq. (8) would also hold in the time-dependent case, and then Eq. (27) would be a direct consequence of it by just multiplying with ρ(rσ, t) and taking the divergence. Conversely, it is only the transverse part of f Hxc (rσ, t) that makes the difference when we compare instantaneously the time-dependent case of Eq. (27) and the static case of Eq. (8). In other words, if we only consider the wavefunctions/forces at a given instant, it is only the non-zero phases/transverse forces that inform us whether we are considering a time-dependent situation. While we do not have access in this instantaneous picture to all memory effects 56,57, we nevertheless see that the transverse forces are important to generate memory over time. In an exchange-only approximation, this role is then taken over by α fx.
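To illustrate what inverting such a Sturm-Liouville problem amounts to numerically, here is a minimal one-dimensional finite-difference sketch for d/dx[ρ(x) dv/dx] = s(x) with Dirichlet boundary values, where s(x) plays the role of −∇ • F Hxc at a fixed time. The function name, grid, and manufactured test case are ours, not from the paper.

```python
import numpy as np

def invert_sturm_liouville_1d(x, rho, source, v_left=0.0, v_right=0.0):
    """Solve d/dx[rho(x) dv/dx] = source(x) for v with Dirichlet ends."""
    n, h = len(x), x[1] - x[0]
    rho_half = 0.5 * (rho[:-1] + rho[1:])      # rho at grid midpoints
    a = np.zeros((n, n))
    b = source.copy()
    a[0, 0] = a[-1, -1] = 1.0                  # boundary rows
    b[0], b[-1] = v_left, v_right
    for i in range(1, n - 1):                  # conservative 2nd-order stencil
        a[i, i - 1] = rho_half[i - 1] / h**2
        a[i, i] = -(rho_half[i - 1] + rho_half[i]) / h**2
        a[i, i + 1] = rho_half[i] / h**2
    return np.linalg.solve(a, b)

# self-test with a manufactured solution v(x) = sin(x), rho(x) = 2 + cos(x) > 0
x = np.linspace(0.0, np.pi, 200)
rho, v_exact = 2.0 + np.cos(x), np.sin(x)
source = np.gradient(rho * np.gradient(v_exact, x), x)   # d/dx[rho dv/dx]
v = invert_sturm_liouville_1d(x, rho, source, v_exact[0], v_exact[-1])
print("max error:", np.abs(v - v_exact).max())
```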
Finally, let us shift attention back to the time-independent setting. Therein, besides Eq. (8), the exact ground-state exchange-correlation potential and force density also obey Eq. (27). This gives rise to a different version of the local-exchange potential. Here we will not investigate this alternative force-based formulation further but will compare these different definitions in a forthcoming publication. It does, however, highlight a route to more, possibly useful conditions: higher-order equations of motion bring with them new exact constraints.
VI. NUMERICAL TESTS
Finally, we consider the differences between the force-based approach and the energy-based approach in practice, with a focus on the effects of the transverse part of the exchange vector field f x expressed through the vector potential α fx. For this, we solve the KS equation in exchange approximation (FBEx), i.e., we take v fx from Eq. (19) and assume v fc = 0 for the total v Hxc in Eq. (12) in every KS iteration step, and check how this performs in comparison to common exchange approximations. In this investigation we do not yet put the transverse part of f x to any beneficial use. Yet, involving only the longitudinal component of f x in the calculation is equivalent to considering the full f x plus the transverse part from f c, since Eq. (25) holds as an exact constraint. A force-based approximation focusing purely on exchange effects would thus also need to consider the transverse contribution from the exchange vector field. To summarize, there are two equally justified viewpoints on this approximation: when considering only the v fx exchange potential, exchange effects from α fx are missing; alternatively, the procedure additionally includes correlation effects from α fc.
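To make the iteration structure concrete, the following minimal sketch runs such a self-consistent loop on a one-dimensional toy model: in each step a local potential is rebuilt from the current density and the single-particle problem is re-diagonalized until the density stops changing. It does not implement the actual v fx of Eq. (19); the helper local_potential_from_density (a soft Hartree-like term) is a purely illustrative stand-in, and the grid, parameters, and names are ours.

```python
import numpy as np

# 1D grid and external potential (toy harmonic well)
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v_ext = 0.5 * x**2

def local_potential_from_density(rho):
    # Stand-in for v_fx of Eq. (19): a soft Hartree-like convolution,
    # purely illustrative of "local potential built from the density".
    return np.array([np.trapz(rho / np.sqrt((x - xi)**2 + 1.0), x) for xi in x])

def solve_single_particle(v_eff, n_occ=2):
    # finite-difference kinetic energy plus local effective potential
    lap = np.diag(np.ones(n - 1), 1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), -1)
    eps, phi = np.linalg.eigh(-0.5 * lap / dx**2 + np.diag(v_eff))
    phi /= np.sqrt(dx)                                 # grid normalization
    return 2.0 * np.sum(phi[:, :n_occ]**2, axis=1)     # doubly occupied orbitals

rho = solve_single_particle(v_ext)                     # non-interacting starting density
for it in range(100):
    rho_new = solve_single_particle(v_ext + local_potential_from_density(rho))
    if np.trapz(np.abs(rho_new - rho), x) < 1e-8:
        break
    rho = 0.5 * rho + 0.5 * rho_new                    # linear mixing for stability
print(f"stopped after {it + 1} iterations; N = {np.trapz(rho, x):.4f}")
```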
We have implemented the force-based local-exchange potential in the real-space code Octopus 58 and ran simulations for a set of atoms in closed-shell configurations using norm-conserving pseudopotentials 59, a grid spacing of 0.15 Bohr, and a radius of 10 Bohr for Be and Ne, a radius of 12 Bohr for Mg, Ar, and Zn, and a radius of 14 Bohr for Ca. We found that the FBEx potential performs similarly to the much more involved OEP in exchange approximation (OEPx) or its further approximation OEPx-KLI 60 (see Fig. 1). While the pure Slater, FBEx and OEPx-KLI potentials all share the same computational scaling as Hartree-Fock, the OEPx method only works as an iterative procedure and is more costly. Furthermore, we demonstrate that the local-exchange potential adheres to the virial relation of the form of Eq. (22) up to numerical inaccuracies because of spherical symmetry, while the OEPx and the OEPx-KLI violate this relation (see Tab. I). This numerically confirms that the exchange functional is not functionally differentiable. Further numerical tests and comparisons, also for small molecules, can be found in App. C.

TABLE I. Difference, in mHa, between the exchange energy computed from the orbitals (or the density) and the exchange energy obtained from the potential using the virial relation, for different local-exchange potentials.
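As an aside, a virial-relation check of the kind reported in Tab. I takes only a few lines of numerics. The exact form of Eq. (22) is not reproduced here; the sketch below uses the standard Levy-Perdew exchange virial, E x = −∫ ρ(r) r • ∇v x (r) d³r, for a spherically symmetric density as a stand-in, tested on the hydrogen 1s case where the exact exchange potential is v x = −v H and E x = −5/16 Ha.

```python
import numpy as np

def virial_exchange_energy(r, rho, v_x):
    """Levy-Perdew virial estimate E_x = -int rho(r) r dv_x/dr 4 pi r^2 dr
    for a spherically symmetric density (stand-in for the Eq. (22) check)."""
    return -np.trapz(rho * r * np.gradient(v_x, r) * 4.0 * np.pi * r**2, r)

r = np.linspace(1e-4, 30.0, 4000)
rho = np.exp(-2.0 * r) / np.pi                        # hydrogen 1s density
v_h = 1.0 / r - np.exp(-2.0 * r) * (1.0 / r + 1.0)    # its Hartree potential
e_x = virial_exchange_energy(r, rho, -v_h)            # one electron: v_x = -v_H
print(f"E_x from virial: {e_x:.4f}  (exact: {-5 / 16:.4f})")
```

For the exact pair (ρ, v x) the two energies agree; for an approximate potential, the residual of this identity is precisely the kind of violation tabulated in Tab. I.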
In fact, Fig. 1 shows that the FBEx and the OEPx potentials are almost identical, apart from the small "bumps" that are an indication of the shell structure of the atoms. The Slater potential does not capture them either, and a suitable correction for it based on the kinetic-energy density is available 61. Due to the use of pseudopotentials in our simulations displayed in Fig. 1, we see here either one or no bump. To check that the differences between these two potentials are indeed only present at the shells of the atoms, we also performed all-electron calculations for Ne and Ar using a grid spacing of 0.05 Bohr and a radius of 14 Bohr, and we find that the potentials differ only at the location of the bumps; see the top panels of Fig. 2. The lower panels of Fig. 2 show the difference between OEPx-KLI and FBEx together with ∥α fx ∥, i.e., the transverse part of f x. The same comparison is conducted with OEPx for those atoms where a bump is visible despite using pseudopotentials; see Fig. 3. In each case we find that the FBEx force has a non-vanishing transverse part only at the position of the bumps, and that the norm of α fx follows a pattern similar to the difference between the FBEx and OEPx potentials, clearly showing that there is a connection between the transverse part of the force and the bumps of the OEPx potential. In fact, we interpret our result in the following way: the bumps appear in the OEPx potential as the procedure tries to impose a longitudinal vector field at places where the exchange vector field actually has a transverse component. Thus, even if we do not employ the transverse part of the forces explicitly, they contain physical information (related to the shell structure of atoms) that can potentially be used for more advanced approximations. For instance, this feature is related to the correlation forces f c, since α fx needs to precisely compensate α fc. In this manner, we get local information about the correlation vector field that could provide quite stringent constraints on future approximations.
FIG. 2. Top panels: Same as Fig. 1 for all-electron calculations. Bottom panels: Difference ∆vx between vFBEx and vOEPx-KLI, and rescaled norm of α fx from the exchange force as in Eq. (23). The vertical lines indicate the positions of the bumps.
FIG. 3. Difference ∆vx between vFBEx and vOEPx, and rescaled norm of α fx from the exchange force for some of the atoms of Fig. 1.
A further comparison of the FBEx and OEPx-KLI methods with inversion procedures that yield the full exchange-correlation potential is performed in Appendix D.
VII. OUTLOOK AND CONCLUSIONS
Considering all the different insights obtained by this investigation, we want to highlight two specific results that we deem important for the future of force-based approximations. On the one hand, we have seen that the transverse part of the exchange vector field contains important physical information. It stands to reason that the standard OEP procedure tries to turn these transverse parts into longitudinal contributions of the corresponding OEP potential, which are responsible for the appearance of the "bumps". An obvious way of including these contributions is to employ an auxiliary system that also contains a vector potential instead of the usual KS system with only a scalar potential. Using the beneficial connection of the force-based approach to CDFT, the corresponding exchange vector field is given via a non-linear partial-differential equation 26. This paves the way to obtain a semi-locally acting vector potential in the context of electronic ground-state DFT.
On the other hand, for time-dependent DFT, we have seen that the appearance of the transverse vector field implies non-adiabaticity. That is, if we solve the corresponding Sturm-Liouville equation (27) instead of the Poisson equation (12), we automatically get a non-adiabatic functional based on force densities. These two aspects make the force-based approach quite promising for finding more accurate yet numerically inexpensive approximations within density-functional theories. It is even relatively easy to extend the present approach to other variants of DFT, for instance to forms that include non-collinear magnetism and spin-orbit coupling [62][63][64][65]. Using the corresponding equations of motion for the current density 66, one can apply the same Hartree-exchange and correlation force density decomposition and hence derive the corresponding potentials also for this case. Furthermore, in order to address the still unknown correlation force density, we highlight that the transverse part of the exchange vector field provides us with local constraints on approximate correlation force densities. In the correlation force density, the interaction part F W [Ψ] − F W [Φ] can be expressed by the correlation hole, while the kinetic part F T [Ψ] − F T [Φ] can be expressed as the difference between the interacting and the non-interacting one-body reduced density matrix close to the diagonal 40. Approximations can then be tested by comparing to the transverse exchange vector field. From this perspective, the success of LDA-based approximations can be explained by the fact that already on the exchange level no transverse forces appear, such that the virial relation is fulfilled, and hence adding purely longitudinal correlations obeys the zero-transverse-vector-field constraint of Eq. (9). Alternatively, one can start from approximate correlated reduced density matrices and derive the corresponding forces. One can therefore either try to build approximate models based on physical intuition 67, derive expressions for these terms for specific cases (e.g., the homogeneous limit) from wave-function methods potentially augmented by modern machine-learning techniques 68, or devise perturbative expansions on top of the KS Slater determinants. Even though the force densities are three-dimensional vector fields and thus more involved than energy expressions, the previously successful application of the aforementioned approaches to construct correlation-energy functionals makes it plausible that similar methods are well applicable to the force-based approach to KS-DFT.
In conclusion, we have shown that defining the Hx potential and energy of KS-DFT by forces is not only conceptually beneficial, but also has certain advantages in practice over the common energy-based approach. It is numerically straightforward to construct the corresponding potential from a given force density, the method avoids various problems of the energy-based approach such as determining implicit functional derivatives, and it further provides an explicit form for the local-exchange potential and exchange energy from the exchange force density. This force-based local (in the sense of how it acts on the wave function) exchange approximation depends non-locally on all other points and all occupied orbitals and is numerically as cheap as the Slater potential. The non-explicit correlation potential is defined uniquely by the correlation force density, and in contrast to the energy-based approach, the role of correlations in compensating the transverse part of the exchange vector field is transparent. It is seen that the exchange vector field provides local information about the properties of the correlation vector field. We also have a straightforward connection to the current-density variant of DFT and to the time-dependent case. Furthermore, the approach can be seamlessly applied to atomic, molecular and solid-state systems. We showed numerically that the well-known bumps of the OEPx potential are connected to the transverse exchange vector field, and with this also to the correlation vector field, due to the exact constraint that the transverse exchange vector field is exactly compensated by the corresponding correlation effects. We think, following the ideas of John Perdew and others, that such local exact constraints are a good starting point to help in devising correlation force density approximations, in DFT and its variants.
where the vector calculus identities ∇(∇ • A) = ∆A + ∇ × (∇ × A) and ∇ × [f(r)C] = (∇f(r)) × C (for a constant vector C) were used. Now the first part gives exactly E x according to Eq. (B3), while the second line appears as an additional term in a virial relation between E x and v fx. But since it appears as the curl of a vector expression, it cannot be equal to the gradient of a scalar potential, so the difference comes from the transverse part of f x, while v fx corresponds only to the longitudinal part of f x. The nice thing is that this gives an explicit form for the transverse part of f x, while the longitudinal part is already given by −∇v fx. We thus find the corresponding Helmholtz decomposition of f x into −∇v fx plus this explicitly given transverse remainder. If in certain situations the second term above is zero, then the virial relation between E x [Φ] and v fx holds in the form of Eq. (B2). We show that for spherically symmetric densities ρ(rσ) = R σ (|r|) this is indeed the case. For this we take the last integral of Eq. (B9) and perform integration by parts with the curl and vanishing boundary terms to arrive at Eq. (B10). But now ∇ × r = 0 and (∇ρ(rσ)) × r = (r × r)R ′ σ (|r|)/|r| = 0, so the above expression evaluates to zero. | 8,493.8 | 2022-03-31T00:00:00.000 | [
"Physics"
] |
Preoperative prediction of lymph node metastasis using deep learning-based features
Lymph node involvement increases the risk of breast cancer recurrence. An accurate non-invasive assessment of nodal involvement is valuable in cancer staging, surgical risk, and cost savings. Radiomics has been proposed to pre-operatively predict sentinel lymph node (SLN) status; however, radiomic models are known to be sensitive to acquisition parameters. The purpose of this study was to develop a prediction model for preoperative prediction of SLN metastasis using deep learning-based (DLB) features and compare its predictive performance to state-of-the-art radiomics. Specifically, this study aimed to compare the generalizability of radiomics vs DLB features in an independent test set with dissimilar resolution. Dynamic contrast-enhanced images from 198 patients (67 positive SLNs) were used in this study. Of these subjects, 163 had an in-plane resolution of 0.7 × 0.7 mm², and were randomly divided into a training set (approximately 67%) and a validation set (approximately 33%). The remaining 35 subjects with a different in-plane resolution (0.78 × 0.78 mm²) were treated as an independent testing set for generalizability. Two methods were employed: (1) conventional radiomics (CR), and (2) DLB features, which replaced hand-curated features with pre-trained VGG-16 features. The threshold determined using the training set was applied to the independent validation and testing datasets. The same feature reduction, feature selection, and model creation procedures were used for both approaches. In the validation set (same resolution as training), the DLB model outperformed the CR model (accuracy 83% vs 80%). Furthermore, in the independent testing set of dissimilar resolution, the DLB model performed markedly better than the CR model (accuracy 77% vs 71%). The predictive performance of the DLB model outperformed the CR model for this task. More interestingly, these improvements were seen particularly in the independent testing set of dissimilar resolution. This could indicate that DLB features can ultimately result in a more generalizable model. Supplementary Information The online version contains supplementary material available at 10.1186/s42492-022-00104-5.
Introduction
Breast cancer increases in stage and severity as it metastasizes to axillary lymph nodes [1]. Lymph node involvement increases the risk of recurrence and acts as a prognostic indicator, with the survival rate of node-positive patients being up to 40% lower than node-negative patients [2][3][4][5][6]. As a result, lymph node status is critical for diagnosis, prognosis, and monitoring of treatments [7].
Although lymph node management has become less invasive with the use of sentinel lymph node (SLN) biopsy as opposed to full axillary lymph node dissection, significant side effects including shoulder dysfunction, lymphedema, and nerve damage are still observed in as many as one-fourth of patients [8,9]. Moreover, studies have reported that > 70% of biopsied SLNs are negative [8], indicating that such a procedure offers no benefit and is potentially harmful to a significant number of breast cancer patients. Accurate non-invasive assessment of nodal involvement is therefore valuable in cancer staging, surgical risk, and financial cost reduction.
Breast cancer is an area of intense interest for the combination of radiomics and artificial intelligence, with clinical impact possible as both a diagnostic and prognostic tool [10]. One such task is the development of a predictive model for non-invasive staging of the axillary lymph nodes as an alternative to SLN biopsy. Nomograms and radiomic pipelines have been used to predict SLN status with promising results [9,[11][12][13][14][15][16][17][18][19]. However, conventional radiomics (CR) has several disadvantages. For instance, the robustness of conventional hand-crafted radiomic features varies with changing parameters, including pixel size, region-of-interest (ROI) delineation, and signal-to-noise ratio [20]. Deep learning has the potential to serve as a more powerful tool to overcome these issues, as shown in several studies [21][22][23][24][25][26]. Moreover, deep learning is capable of learning high-level and task-adaptive image features [27]. It enables direct feature extraction at multiple levels without explicit definition and can provide a higher level of feature abstraction [28]. However, deep learning requires a large training data size to obtain a generalizable and functional classification model. Fortunately, studies have demonstrated that the initial features extracted by a deep learning network are largely similar to CR, since both detect edges, ripples, and various other textures prior to observing more complex features [29][30][31]. Thus, it is possible to use features identified by a pre-trained deep learning network as an alternative to the hand-crafted features used in CR.
The purpose of this study was to develop a DLB feature prediction model for preoperative prediction of SLN metastasis and compare its predictive performance to state-of-the-art CR. Specifically, this study aimed to compare the generalizability of CR vs DLB features in an independent testing set of dissimilar resolution. Figure 1 shows the general pipeline used in this work.
Study population
The dataset used in this study is an expansion of that described in a previous publication [13]. Briefly, images for this institutional review board-approved retrospective study were collected from June 2013 to June 2017. Inclusion criteria were patients who had (1) preoperative dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI), (2) a diagnosis of invasive breast cancer by histopathology, (3) an SLN biopsy result, and (4) no neoadjuvant chemotherapy. Exclusion criteria were patients who had (1) no SLN biopsy result, (2) a very small tumor ROI (less than 64 voxels), or (3) MRI after neoadjuvant chemotherapy. After applying the inclusion/exclusion criteria, a sample of 198 patients (67 positive SLNs and 131 negative SLNs) was used in this study. Of those 198 subjects, 163 had an in-plane resolution of 0.7 × 0.7 mm²; that 163-subject cohort was randomly divided into two independent subsets: a training set (approximately 67%, 109 patients with 37 positive SLNs) and a validation set (approximately 33%, 54 patients with 18 positive SLNs).
The remaining 35 subjects (35 patients with 12 positive SLNs) with a different in-plane resolution (0.78 × 0.78 mm²) were treated as an independent testing set with dissimilar resolution to test the generalizability of the predictive models for imaging data acquired with slightly different resolution. Given that radiomics has been shown to have limited generalizability, an independent testing set of dissimilar resolution will more rigorously assess this potential of the predictive model.

Fig. 1 Schematic representation of the pipeline for feature extraction, reduction, and model creation. The CR pipeline and the pipeline using deep learning-based (DLB) features differ only in their feature extraction step. All other steps remain identical. LASSO: Least absolute shrinkage selection operator; ROC: Receiver operating characteristic
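For concreteness, the cohort split described above could be reproduced along the following lines; the use of scikit-learn, the stratification, and all variable names are our assumptions (the paper only states a random division).

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
subjects = np.arange(198)                          # illustrative subject IDs
pixel_size = np.where(subjects < 163, 0.70, 0.78)  # stand-in for the two resolutions
y = rng.binomial(1, 67 / 198, size=198)            # stand-in SLN labels

same_res = pixel_size == 0.70
test_ids = subjects[~same_res]                     # 35 subjects at 0.78 mm: held-out test
train_ids, val_ids = train_test_split(
    subjects[same_res], test_size=54, stratify=y[same_res], random_state=0
)                                                  # 54 of 163 (~33%) for validation
print(len(train_ids), len(val_ids), len(test_ids))  # 109 54 35
```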
Clinical data collected for this study included whether the tumor was confined to the upper inner quadrant, multifocality, age, pathological type, tumor grade, molecular subtype, and lymphovascular invasion.
MRI examination
The MRI examinations were all performed using a dedicated 8-channel breast coil on a 1.5 T GE Signa scanner (GE Healthcare, Wauwatosa). The sequence of interest in this study was the DCE series; a sagittal VIBRANT multiphase sequence was acquired with the following parameters: repetition time (TR) = 4.46-7.80 ms; echo time (TE) = 1.54-4.20 ms; flip angle = 10°; matrix = 256 × 256; slice thickness = 2 mm. The I.V. contrast agent was Magnevist (Schering, Berlin), injected at a dose of 0.2 mL/kg at a rate of 2 mL/s, followed by a 20 mL saline flush. Five phases were acquired: one pre-contrast and four post-contrast images. Patients with pixel sizes of 0.7 × 0.7 mm² were split into training and validation cohorts. Patients with pixel sizes of 0.78 × 0.78 mm² were separately analyzed in an independent testing set. This analysis reflects the clinically practical reality that pixel size is knowingly difficult to standardize, as it may need to be adjusted based on patient-specific variables (e.g., the size of the patient).
Map calculation
To reduce the effect of varying TR and TE, three ratio maps were used: wash-in maps, ((S₁ − S₀)/S₀) × 100%; wash-out maps, ((S₁ − S₄)/S₁) × 100%; and signal enhancement ratio (SER) maps, ((S₁ − S₀)/(S₄ − S₀)) × 100%, where S₀, S₁, and S₄ are the pre-contrast, first post-contrast, and fourth (last) post-contrast images, respectively. These maps are independent of the original MR signal intensity and capture the behavior of contrast enhancement in the tissue. A representative image and the calculated kinetic maps are shown in Fig. 2.
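A direct implementation of these map definitions might look as follows; the epsilon guard against division by zero is our implementation detail, not something specified in the paper.

```python
import numpy as np

def kinetic_maps(dce, eps=1e-6):
    """Wash-in, wash-out, and SER maps (in %) from a DCE series.

    dce: array of shape (5, H, W) holding the pre-contrast phase S0 and the
    four post-contrast phases S1..S4 in acquisition order.
    """
    s0, s1, s4 = (dce[i].astype(float) for i in (0, 1, 4))
    wash_in = (s1 - s0) / (s0 + eps) * 100.0
    wash_out = (s1 - s4) / (s1 + eps) * 100.0
    ser = (s1 - s0) / (s4 - s0 + eps) * 100.0
    return wash_in, wash_out, ser

wash_in, wash_out, ser = kinetic_maps(np.random.rand(5, 256, 256) + 0.1)
```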
Segmentation
ROIs of the tumor were manually drawn on the first post-contrast image by a radiologist with 11 years of experience. We note that although manually drawn ROIs can be subjective, an automated convolutional neural network (CNN)-based segmentation was shown to be comparable in a radiomics task-based assessment within this cohort [32]. The original ROI was dilated by 4 mm using Matlab v2017b (MathWorks, Natick). This resulted in two regions of interest: one intratumoral ROI and one peritumoral region (0-4 mm). These regions are shown in Fig. 3.
DLB features
VGG-16 [36], a pre-trained CNN architecture that is 16 layers deep, was utilized for DLB feature extraction. A schematic representation of the network is shown in Fig. 4. The image was multiplied by the binary mask of either the intratumoral ROI or the peritumoral ROI such that the regions outside of the ROIs were set to zero. Absolute resampling, similar to that described above, was performed (for intratumoral ROIs, wash-in map: 0 ∼ 640%; wash-out map: −156 ∼ 100%; SER map: −1280 ∼ 1280%; for peritumoral ROIs, wash-in map: 0 ∼ 640%; wash-out map: −540 ∼ 100%; SER map: −1280 ∼ 1280%). Data were then normalized on a scale from 0 to 1, with 0 being the lowest and 1 the highest value referenced above. The resultant image was then multiplied by 255 to match the 0-255 range expected by VGG-16.
VGG-16 has a predefined input size of 224 × 224 × 3. Each 2D slice of our dataset was cropped to 224 × 224. A 3D volume composed of the wash-in, wash-out, and SER maps for each slice was used as input.
Matlab was used to import VGG-16. The model was not retrained; instead, all layers remained frozen (weights unchanged), and only activations from the last fully connected layer (fc8) were extracted. These were exported as rows, with each 2D slice yielding a single row of 1000 features. Given that each ROI spans multiple slices, the value in each column was averaged across all slices for that subject.
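The paper performed this step with Matlab's pre-trained VGG-16; a rough PyTorch equivalent is sketched below. Note that torchvision's VGG-16 normally expects ImageNet-normalized inputs, whereas the paper feeds masked kinetic maps rescaled to 0-255, so the preprocessing here follows the paper's description rather than torchvision's convention, and the function name is ours.

```python
import numpy as np
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()  # frozen, never retrained

def dlb_features(slices):
    """slices: (n_slices, 3, 224, 224) masked wash-in/wash-out/SER maps,
    rescaled to 0-255. Returns one 1000-vector per ROI by averaging the
    final fully connected layer (the fc8 logits) over all slices."""
    with torch.no_grad():
        logits = model(torch.as_tensor(slices, dtype=torch.float32))  # (n, 1000)
    return logits.mean(dim=0).numpy()

feat = dlb_features(np.random.rand(12, 3, 224, 224) * 255.0)
print(feat.shape)  # (1000,)
```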
Feature reduction
From this point forward, the pipelines for the CR and DLB features are run separately but follow identical steps.
Standard z-score normalization was used on the training set; the z-score is the value minus the training set mean, divided by the training set standard deviation. The validation and testing sets were also normalized using the training set mean and standard deviation. The training set was rebalanced using an adaptive synthetic sampling approach, which improves class balance by creating new samples from the minority group [37]. Given the high dimensionality of all the extracted features, several steps were performed to remove redundant or non-informative features, as sketched below. Firstly, the Mann-Whitney U-test was used to find significantly different features between SLN-positive and SLN-negative groups; a range of p-value thresholds was tested (0.001, 0.005, 0.01, 0.05). Secondly, groups of highly correlated radiomic features were identified (Spearman ρ) and only one representative feature was selected from each correlated group; similarly, several ρ-value thresholds were tested (0.75, 0.80, 0.85, 0.90, 0.95). Finally, an optional step of principal component analysis (PCA) was performed to further reduce the feature space; several numbers of PCA components were tested (20, 40, 60, 80, 100). The optimal thresholds for feature reduction were chosen as those that resulted in the highest validation accuracy averaged over 100 random seeds.
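A minimal sketch of these reduction steps follows, assuming the thresholds above and that the first feature of each correlated group is kept as its representative (a tie-breaking rule the paper does not specify); it also assumes at least three features survive the univariate filter.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

def zscore(train, other):
    # z-scoring with training-set statistics, applied unchanged to val/test
    mu, sd = train.mean(axis=0), train.std(axis=0) + 1e-12
    return (train - mu) / sd, (other - mu) / sd

def reduce_features(x_train, y_train, p_thresh=0.05, rho_thresh=0.95):
    # 1) univariate filter: keep features that differ between SLN classes
    keep = [j for j in range(x_train.shape[1])
            if mannwhitneyu(x_train[y_train == 1, j],
                            x_train[y_train == 0, j]).pvalue < p_thresh]
    x = x_train[:, keep]
    # 2) correlation pruning: one representative per correlated group
    rho = np.abs(spearmanr(x)[0])
    selected = []
    for j in range(x.shape[1]):
        if all(rho[j, s] < rho_thresh for s in selected):
            selected.append(j)
    return [keep[j] for j in selected]
```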
Feature selection and model creation
The remaining features from the reduction process, combined with the clinical features, were the input for the feature selection process. A logistic regression model was used for the prediction task. The selection of important predictors was performed in the training set using the Least Absolute Shrinkage Selection Operator (LASSO) regression [38] with 3-fold cross-validation. The selected model was that of minimum cross-validation error plus one standard deviation. To avoid overfitting, the maximum number of selected features was restricted to 10. These features were then used to establish logistic regression models to predict SLN metastasis. The optimal threshold of the receiver operating characteristic analysis was determined by maximizing the Youden index (YI) in the training set, where the YI is defined as sensitivity + specificity − 1. This threshold was applied to the independent validation and testing datasets. Tabulated predictive performance measures included area under the curve (AUC), sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and accuracy. To avoid the model optimization becoming stuck in a local minimum, the LASSO procedure was repeated 100 times with different seeds. The cross-validation results across all folds were averaged, and the model that achieved the highest accuracy in the training set was selected as the prediction model. Additionally, the training set was shuffled at each iteration to randomize the cross-validation within the training set, while the independent validation and testing sets remained the same. A simplified sketch of this selection and thresholding procedure is given below.
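The sketch below uses LassoCV applied to the binary label to emulate the paper's LASSO step; the '1-SE rule' model choice and the 100-seed repetition are omitted, and all names are ours.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_curve

def fit_sln_model(x_train, y_train, max_features=10, seed=0):
    lasso = LassoCV(cv=3, random_state=seed).fit(x_train, y_train)
    order = np.argsort(-np.abs(lasso.coef_))          # strongest predictors first
    sel = [j for j in order if lasso.coef_[j] != 0][:max_features]
    clf = LogisticRegression(max_iter=1000).fit(x_train[:, sel], y_train)
    prob = clf.predict_proba(x_train[:, sel])[:, 1]
    fpr, tpr, thr = roc_curve(y_train, prob)
    threshold = thr[np.argmax(tpr - fpr)]             # maximizes YI = sens + spec - 1
    return clf, sel, threshold

# the fixed training threshold is then applied to validation/testing data:
# y_pred = (clf.predict_proba(x_val[:, sel])[:, 1] >= threshold).astype(int)
```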
Model incorporating peritumoral region
The primary analysis for this study was the model incorporating intratumoral plus peritumoral (4 mm) features, given that this combination has been shown to outperform intratumoral features alone in a previous publication [13]. For the CR model, a total of 157 features (146 radiomic and 11 clinical) were included in the 3-fold cross-validation LASSO feature selection process. The optimal feature reduction parameters were a ρ-value threshold of 0.95, a p-value threshold of 0.05, and no PCA. We note that PCA was also performed for the CR feature reduction pipeline but did not improve the predictive performance of the model. Eight features, including 1 clinical, 2 shape, and 5 texture features, were selected for this model (Table 1).
For the DLB model, the optimal feature reduction parameters were a ρ-value threshold of 0.85, a p-value threshold of 0.001, and a PCA value of 80. After feature reduction, 91 features (80 DLB and 11 clinical) were input into the feature selection process. For this model, 5 features were finally selected, including 2 clinical (tumor grade and lymphovascular invasion) and 3 DLB features.
Predictive performance metrics are shown in Table 2. We note that the performance of the training set is included for completeness; however, it should not be used for comparison due to overfitting concerns.
Model excluding peritumoral region
As a secondary analysis, models created utilizing clinical features and intratumoral features alone were analyzed.
For the CR model, a total of 104 features (93 radiomic and 11 clinical) were fed into the 3-fold cross-validation LASSO feature selection process. The optimal feature reduction parameters were a ρ-value threshold of 0.95, a p-value threshold of 0.05, and no PCA. In logistic model creation, a maximum of 10 features was allowed. As discussed in Methods, the random seed with the highest training set accuracy was reported. Finally, 10 features were chosen, including 2 clinical features, 2 shape features, and 6 texture features (Table 3).
For the DLB model, the optimal feature reduction parameters were a ρ-value threshold of 0.75, a p-value threshold of 0.005, and no PCA. After feature reduction, 48 features (37 DLB and 11 clinical) were input into the 3-fold cross-validation LASSO feature selection process. For this model, 9 features were selected, including 2 clinical features (tumor grade and lymphovascular invasion) and 7 DLB features.
Predictive performance metrics are shown in Table 4 and Fig. 6 for the CR and DLB models. In the validation set, the DLB pipeline performed similarly to the CR pipeline.
Discussion and conclusions
The results of our study showed that the predictive performance of the DLB model outperformed the CR model in several metrics. More interestingly, these improvements were seen particularly in the independent testing set with dissimilar resolution. This could indicate that DLB features are less sensitive to varying conditions (e.g., pixel size changes) and ultimately result in a more generalizable model. SLN status prediction has been explored using nomograms; examples include those developed by Memorial Sloan Kettering Cancer Center and MD Anderson, which include age, tumor characteristics (size, grade, type, focality, location), lymphovascular invasion, and hormone receptors to predict the likelihood of a diseased, positive SLN. These nomograms have been shown to have moderate predictive performance [11,12]. Imaging studies have investigated the use of non-invasive quantitative imaging radiomic biomarkers along with clinical data for prediction of SLN status, with promising results (AUC > 0.8) [9,[13][14][15][16][17][18][19]. Specifically, our previously published work has validated a radiomic pipeline of the peritumoral region in combination with the intratumoral region for the prediction of SLN metastasis [13]. The peritumoral region is of interest because tissue surrounding the tumor may contain valuable information such as angiogenic-lymphangiogenic factors and tumor-infiltrating lymphocytes, which have been shown to be related to treatment response [39]. Predictive models based on CR features can be disadvantageous because radiomic features have been shown to be sensitive to changing parameters, such as pixel size alteration [20]. Numerous studies have shown DLB models to outperform CR pipelines. Using MRI, CNN performance has been compared and shown to outperform radiomics for the purposes of breast lesion classification [23] and gene mutation prediction in low-grade gliomas [24]. Additionally, the combination of CR and DLB features was superior in survival and classification prediction for high-grade gliomas [25,26]. Our study took one step further, creating an independent testing set of dissimilar resolution, in an attempt to identify a particular condition in which DLB features may outperform CR features. Note that the goal of this work is to compare DLB and CR features. Also, given that the radiomic model has been shown to outperform the model using clinical characteristics alone in a previous study [13], the models presented here are not compared with the clinical-only model. Deep learning can utilize CNNs to extract features by applying convolution layers composed of small-sized fields or kernels, followed by pooling layers to reduce the size of the resultant feature space. Optimizing the kernels applied to the image to extract meaningful features is the purpose of training the network. Consequently, the performance of a CNN is dependent on the amount of data available to train on.

Table 2 Predictive performance results for the intratumoral plus 4 mm peritumoral region for the CR and DLB models. Values shown are from the random seed with the highest training set accuracy. The DLB pipeline slightly outperformed CR in the validation set of the same resolution as the training set. A larger improvement is seen in the testing set of dissimilar resolution. This indicates the DLB pipeline might be more generalizable and less sensitive to pixel size differences
This results in the CNN being more 'data hungry', typically needing a very large sample size, on the order of millions of images, for optimal performance [40]. Even when collaborating with multiple institutions, acquiring millions of medical images is frequently not feasible, especially for rare diseases. One way to address this limitation is the utilization of transfer learning. Transfer learning involves the implementation of a pre-trained network that is further fine-tuned with samples specific to a desired task. Transfer learning has been explored for prediction of lymph node metastasis in patients with cervical cancer using MRI (AUC: 0.9) and breast cancer using CT (AUC: 0.8) with promising results [41,42]. In this work, we used a pre-trained VGG model that was originally optimized to classify 1000 types of objects. Thus, the features extracted from the VGG model are expected to be robust, as they were used to differentiate similarly shaped objects such as tow trucks and cars, and animals such as ladybugs and crabs. Transfer learning models that further train VGG with new medical imaging data can further manipulate the way features are extracted, whereas our proposed method does not rely on refining the original VGG network and directly utilizes the robust features learned in its original task. Our proposed model uses the last fully connected layer in VGG prior to classification, allowing our DCE images to fully propagate throughout the network for feature extraction. This approach is more readily available than a transfer learning approach and can be directly incorporated into a more traditional radiomics pipeline. In particular, for studies with relatively small sample sizes, it is expected to be less prone to overfitting.
Future directions of this study include fine-tuning of the network directly for classification instead of feature extraction [40]. Additionally, incorporation of additional features such as CoLlAGe and wavelet features could be performed for the radiomics pipeline; specifically, it has been shown that these directional-based features correlate with the tumor microenvironment as seen on pathology, such as the orientation and dense packing of tumor-infiltrating lymphocytes [39,43]. Moreover, other evaluations varying parameters other than in-plane resolution could be performed to further evaluate generalizability. In general, there also exists the possibility to look specifically at the lymph nodes. This study analyzed the in-breast tumor to predict metastasis to the lymph node. To date, efforts to analyze axillary lymph nodes on MRI have been limited by small axillary lymph node sizes, breast coil sensitivity in regions of the axilla, and exclusion of a majority of the axilla from the field of view. Although nodal morphological features on MRI are predictive of malignancy [2,44], the predictive value of contrast enhancement and morphological criteria, such as size and shape of lymph nodes, has been found to be controversial [45,46]. Furthermore, application of this model to more advanced-stage breast cancer (e.g., patients undergoing neoadjuvant chemotherapy) or other cancer types (e.g., head and neck cancer) is another potential avenue of study.
Additional file 1: Supplemental Table 1. Summary of radiomic features extracted.
Fig. 6 Predictive performance of features including the intratumoral region with CR and DLB models. The DLB model performed similarly to CR in the validation set. In the testing set, the DLB model outperformed CR in numerous metrics, including NPV, accuracy, and YI. This indicates the DLB model might be more generalizable and less sensitive to pixel size differences. Note that the performance of the training set is not used for comparison due to overfitting concerns. AUC: area under the curve; Sens: sensitivity; Spec: specificity; PPV: positive predictive value; NPV: negative predictive value; Acc: accuracy; YI: Youden index | 5,147.8 | 2022-03-07T00:00:00.000 | [
"Computer Science"
] |
Lentinan and β-glucan extract from shiitake mushroom, Lentinula edodes, alleviate acute LPS-induced hematological changes in mice
Objective(s): The immunomodulatory activity of β-glucans from the shiitake mushroom (Lentinula edodes) is well known. We investigated whether β-glucans from L. edodes would attenuate the acute effects of lipopolysaccharide (LPS) on peripheral hematological parameters in mice. Materials and Methods: An in-house β-glucan extract (BG) prepared from fruiting bodies of the shiitake mushroom L. edodes was chemically quantified and characterized using spectrophotometry and HPLC. Male BALB/c mice directly inhaled aerosolized LPS at 3 mg/ml and were treated with BG or a commercial β-glucan (known as lentinan; LNT) (10 mg/kg bw) at 1 hr before or 6 hr after LPS inhalation. Blood samples were collected by cardiac puncture from euthanized mice at 16 hr post-treatment. Results: The results showed a significant reduction in the levels of blood parameters, including red blood cells (RBC), hemoglobin (HGB), hematocrit (HCT), and platelets (PLT), and a significant increase in blood lymphocyte counts in LPS-treated mice compared with the control mice (P≤0.05). Total white blood cell, neutrophil, and monocyte counts did not show any significant difference among the groups. Treatment of LPS-challenged mice with LNT or BG significantly increased the levels of RBC, HGB, HCT, and PLT, and reduced blood lymphocyte counts compared with LPS-treated mice (P≤0.05). Conclusion: These findings suggest that β-glucans from L. edodes might be effective in attenuating the effects of inhaled LPS on peripheral blood parameters. Thus, these findings might be useful in acute inflammatory diseases, particularly pulmonary infectious diseases, in which hematological parameters are affected.
Introduction
Lipopolysaccharide (LPS), also known as endotoxin, is a major and abundant component of the gram-negative bacterial cell wall that is involved in the activation of the CD14/TLR4 receptor on monocytes and macrophages and in the regulation of inflammatory cytokines (1). This inductive effect of LPS has led to the use of this compound in animal models of diseases such as sepsis, mastitis, enteritis, acute lung injury (ALI), and various cancer types (2)(3)(4)(5)(6). Administration of LPS in rodents is usually carried out via two main routes: injection (intravenous or intraperitoneal) and pulmonary delivery (3,7). The latter can be modeled through either tracheal instillation or inhalation, where the alveolar epithelium is the primary damaged structure (8). Pulmonary delivery of LPS may lead to ALI (9), causing infiltration of excessive neutrophils into the lung tissues followed by the release of pro-inflammatory cytokines and endothelial and epithelial lung injury (10). LPS-stimulated ALI also increases the content of white blood cells in the lung (11). However, very little is known regarding the effects of pulmonary delivery of LPS on hematological parameters of blood in a murine model. β-glucans are abundant in the cell wall of mushrooms and consist of long- or short-chain polymers of glucose subunits with β-1,3 and β-1,6 linkages that are responsible for the linear and branching structures, respectively (12). The most studied health-related effects of mushroom β-glucans include their ability to modulate the immune system (12). Lentinan, a β-glucan with a 1,3 linkage derived from the shiitake mushroom (Lentinula edodes), has been widely studied mainly for its immunomodulatory activity. Lentinan triggers signaling pathways, such as MAPK, NF-κB, and Syk-PKC, by binding pattern recognition receptors (TLRs, Dectin-1) and the complement receptor type 3 (CR3, also known as CD11b/CD18) on the membrane of various immune cells, particularly natural killer cells, macrophages, and T cells (13)(14)(15). Such effects have also been demonstrated for polysaccharide fractions and extracts containing β-glucans isolated from shiitake mushrooms (14,16,17).
A number of previous studies have confirmed that lentinan or β-glucan extracts from shiitake mushroom can attenuate in vitro LPS-induced pro-inflammatory cytokines and oxidative stress (17), and can ameliorate in vivo LPS-induced mastitis (6), tumor growth (18), and lethality in mice (19). However, no animal study has been undertaken to investigate the alleviative effects of lentinan or β-glucan extracts obtained from shiitake mushrooms in response to LPS delivered through the pulmonary route. Also, very little is known about how lentinan or β-glucan extracts neutralize or ameliorate LPS-induced damage to blood parameters in a murine model. Consequently, the present study aimed to evaluate whether a commercial lentinan and an in-house β-glucan extract from shiitake mushroom would attenuate the negative effects of LPS inhaled into the lung on hematological parameters in mice.
Production of in-house polysaccharide extract
Fruiting bodies of shiitake mushroom, L. edodes (strain no. M3102, the Mycelia company, Deinze, Belgium) were prepared in the mushroom laboratory of Industrial Fungi Biotechnology Research Department, Academic Center for Education, Culture, and Research (ACECR)-Mashhad, Iran. Mature fruiting bodies were harvested and immediately stored in a -80 °C freezer for 48 hr before being subjected to quick-freezing with liquid nitrogen according to a protocol described previously (20). The frozen samples were transferred to a freeze drier (Ferdowsi University's central laboratory, Mashhad, Iran) under 0.1 mbar for 72 hr until a fine powder was obtained. The lyophilized samples were stored at -20 °C until use. Polysaccharide extract (PE) of the lyophilized shiitake mushroom was produced based on a hot water method (21,22). Distilled water with a ratio of 1 to 10 was added to 100 g of the powder and autoclaved for 30 min at 120 °C under the pressure of 1 bar. After cooling, the solution was centrifuged at 10,000 rpm for 10 min. The supernatant was mixed with 96% ethanol and placed at 4 °C for 24 hr. The extract was centrifuged at 10,000 rpm for 10 min. The resultant precipitate considered PE was subjected to freeze-drying as described above.
Chemical analysis of β-glucan
Quantification of the β-glucan present in the lyophilized PE was performed at 510 nm using a spectrophotometer according to the manufacturer's instructions for a dedicated mushroom and yeast β-glucan assay kit (cat. no. K-YBGL, Megazyme International, Wicklow, Ireland). In addition, high-performance liquid chromatography (HPLC) analysis of β-glucan was performed using a single four-solvent-pump Agilent Model 1100 HPLC (USA) with an RID detector (Pars Knowledge Pars Company, Isfahan, Iran). The C18 ODS column was 250 mm in length, with a particle size of 4.6 µm and an internal diameter of approximately half a centimeter. The mobile phase of 100% deionized water with a flow rate of 1 ml/min was applied isocratically, with a refractive index detector at ambient temperature. Commercial lentinan was also subjected to HPLC analysis using the same method as described for the PE containing β-glucan. The amount of β-glucan in the PE was calculated using the line equation obtained from the standard curve of lentinan (y = 156608x - 7150.7).
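The conversion from HPLC peak area to β-glucan amount simply inverts the stated standard-curve equation; the small helper below (name ours) makes the arithmetic explicit.

```python
def beta_glucan_from_peak_area(y):
    """Invert the lentinan standard curve y = 156608*x - 7150.7 to recover
    the analyte amount x from an HPLC peak area y (calibration units)."""
    return (y + 7150.7) / 156608.0
```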
Animals
Male Balb/C mice (aged 7 weeks, weighing 20 g on average) were provided by the animal housing laboratory of Mashhad University of Medical Sciences (Mashhad, Iran). The experimental protocol for all mice was approved by the Animal Ethics Committee of the Iranian Academic Center for Education, Culture, and Research, Mashhad (code no. IR.ACECR.JDM.REC.1399.010). The animals were kept in a controlled environment with free access to food and water. All mice used in this research were treated humanely according to institutional guidelines for animal welfare, with due consideration to the alleviation of distress and discomfort.
LPS preparation and aerosol exposures
LPS was freshly reconstituted in phosphate-buffered saline (PBS) at 3 mg/ml. The LPS solution was aerosolized with an air nebulizer in the animal housing laboratory of Mashhad University of Medical Sciences (Mashhad, Iran) according to a previous protocol for mice (23), with modifications following a pilot study. Mice were placed, five at a time, in a cylindrical chamber connected to the air nebulizer containing 1.5 ml of the LPS solution. Mice inhaled the aerosolized LPS for 20 min and remained inside the chamber for another 20 min while the air nebulizer was turned off. Control mice were exposed to 1.5 ml of aerosolized PBS under the same conditions as described for the LPS-treated mice.
Preparation of injectable treatment
Lentinan or the PE containing β-glucan (20 mg) was reconstituted in 500 μl of 20% (v/v) DMSO in PBS to make a stock of 40 mg/ml. The stock solution was diluted with PBS to reach a concentration of 2 mg/ml, at which point the concentration of DMSO was 1% v/v. The working solution was filtered through a 0.22-micron filter and kept at 4 °C until use. Then, 100 µl of the cold working solution was injected intraperitoneally into each 20-g mouse. This amount of injectable lentinan or PE containing β-glucan equals 10 mg/kg of mouse bw, which has been shown to be safe and had no negative effect on the survival rate of mice in a previous study (19).
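A quick sanity check of the stated dilution arithmetic (all values taken from the text):

```python
stock_mg_per_ml = 20 / 0.5               # 20 mg in 500 ul -> 40 mg/ml
working_mg_per_ml = 2.0                  # after dilution with PBS
injected_mg = working_mg_per_ml * 0.1    # 100 ul injected per mouse
dose_mg_per_kg = injected_mg / 0.020     # per 20 g (0.020 kg) mouse
print(stock_mg_per_ml, dose_mg_per_kg)   # 40.0 10.0 -> matches 10 mg/kg bw
```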
Study design
Four study groups (n=5 each) were designed as follows:
• Control: Mice exposed to aerosolized PBS and euthanized after 16 hr.
• Sham: Mice exposed to aerosolized LPS (dose: 3 mg/ml in PBS; 1.5 ml final volume) and euthanized after 16 hr. These mice were treated with PBS alone.
• LPS LNT: Mice treated with lentinan (10 mg/kg bw) at 6 hr after (treatment) or 1 hr before (prophylaxis) exposure to aerosolized LPS (dose: 3 mg/ml in PBS; 1.5 ml final volume). Mice were euthanized 16 hr after exposure to LPS.
• LPS BG: Mice treated with the PE containing β-glucan (10 mg/kg bw) at 6 hr after (treatment) or 1 hr before (prophylaxis) exposure to aerosolized LPS (dose: 3 mg/ml in PBS; 1.5 ml final volume). Mice were euthanized 16 hr after exposure to LPS.
Blood collection
Mice were deeply anesthetized with 100 µl of a mixture of 10% xylazine and 10% ketamine at a ratio of 3:7. The blood samples were obtained by cardiac puncture. In brief, the anterior abdominal wall was opened and the diaphragm muscle was removed. The beating heart was gently accessed and cardiac blood was drawn with a 22G needle from the right and left ventricles. About 500 µl of blood was transferred into EDTA-treated tubes and kept gently agitated to avoid agglutination until subjected to cytometry. A complete blood count (CBC) of the blood samples was performed with a Hematology Analyzer (Siemens ADVIA® 2120i) in the medical central laboratory of the Iranian Academic Center for Education, Culture, and Research (ACECR), Mashhad, Iran. Table 1 shows the hematological parameters measured by the Hematology Analyzer.
Statistical analysis
Data were subjected to one-way ANOVA using SPSS software version 25. Mean values were compared by Duncan's multiple range test and are reported as means ± standard deviation. A probability of P≤0.05 was considered significant. GraphPad Prism version 9 was used to draw graphs.
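The analysis itself was done in SPSS; a rough scipy equivalent of the ANOVA step is sketched below with placeholder numbers. Duncan's multiple range test has no scipy implementation, so Tukey's HSD appears here only as an illustrative post hoc stand-in, not the test the paper used.

```python
from scipy import stats

# one hematological parameter (e.g., RBC) per study arm; placeholder data
control = [8.9, 8.7, 8.8, 9.0, 8.6]
sham = [7.1, 7.4, 7.0, 7.3, 7.2]
lnt = [8.5, 8.7, 8.4, 8.6, 8.8]
bg = [8.0, 8.2, 7.9, 8.1, 8.3]

f, p = stats.f_oneway(control, sham, lnt, bg)   # one-way ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(stats.tukey_hsd(control, sham, lnt, bg))  # illustrative post hoc only
```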
Confirmation of β-glucans in the in-house extract
The extraction efficiency of PE was found to be 9.12% of the shiitake mushroom powder (dw). The mushroom and yeast β-glucan assay kit-based quantification of glucans showed that PE contained 27% β-glucan and 5% α-glucan. Further HPLC analysis revealed that PE produced a clear and strong peak with a retention time (RT) of 4.748 min (Figure 1a). A similar peak with an RT of 4.786 min was also obtained with LNT (Figure 1b), a nearly pure β-glucan extracted from shiitake mushroom. According to the peak area quantification, PE contained 27.5% β-glucan (dw), which confirmed the kit-based findings.
RBC, hematocrit, and MCV contents
As depicted in Figure 2, inhalation of LPS by healthy mice significantly reduced their RBC count compared with the control group (P≤0.05). LPS-exposed mice treated with LNT or BG displayed significantly higher RBC levels than the sham group (P≤0.05). In addition, there was no statistically significant difference in RBC levels between the control mice and LPS-exposed mice treated with LNT, with RBC levels of 8.8 and 8.6 million/μl, respectively (P≥0.05). However, the RBC content of LPS-exposed mice treated with BG was significantly lower than that of the control mice (P≤0.05).
The prophylactic groups of mice exhibited different results from the treatment groups. Pre-treatment of mice with LNT or BG did not compensate for the decrease in RBC count caused by LPS inhalation compared with the control mice (Figure 2). As expected from the RBC counts, inhalation of LPS by healthy mice significantly reduced their hematocrit levels compared with the control group (P≤0.05) (Figure 3). The hematocrit content of 40.6% in the LPS-exposed mice treated with PBS was significantly lower than that of LPS-exposed mice treated with LNT (P≤0.05). However, the hematocrit content of neither the LNT nor the BG group reached that of the control group (P≤0.05). In the prophylaxis tests, no alleviative effect of LNT or BG was observed in the LPS-challenged mice compared with the control group (Figure 3).
The average size of the RBCs, indicated as mean corpuscular volume (MCV), decreased in LPS-exposed mice, but this reduction was not significant (P≥0.05). Although treatment of LPS-exposed mice with LNT or BG apparently increased MCV, the difference was not statistically significant (P≥0.05) (Figure 4). Meanwhile, pre-treatment of mice with BG in the prophylactic groups caused a reduction in MCV compared with the sham group (Figure 4).
Hemoglobin and MCHC contents
Hemoglobin (HGB) levels of healthy mice exposed to LPS inhalation were notably lower than those of the control group (P≤0.05). Although treatment of LPS-exposed mice with LNT increased their HGB levels, only BG significantly increased the HGB content of LPS-challenged mice, from 11.18 to 12.14 g/dl (P≤0.05). However, neither LNT nor BG restored hemoglobin to the level of the control group (P≤0.05). Similar to the RBC count, pre-treatment of mice with LNT or BG did not significantly alleviate the reducing effect of LPS inhalation on mouse hemoglobin levels (Figure 5).
The mean corpuscular hemoglobin concentration (MCHC) measures the average concentration of HGB in the RBCs. The findings showed a significant increase in MCHC following LPS inhalation in mice. Treatment with LNT significantly reduced this index to the control-group level. In the prophylaxis groups, no significant difference in MCHC was observed between the sham and the treatment groups (Figure 6).
Platelet and MPV contents
Inhalation of LPS by healthy mice considerably reduced their platelet levels compared with the control group (P≤0.05). Post-treatment or pre-treatment of LPS-exposed mice with LNT or BG significantly increased platelet levels compared with the sham group (P≤0.05). However, no treatment increased this index to the level of the control group (Figure 7). Despite the changes in platelet numbers, there was no statistically significant change in the average platelet size, indicated as mean platelet volume (MPV), among the groups (Figure 8).
Cardiac WBC and differential count
Healthy mice receiving LPS by inhalation showed a significant increase in their cardiac lymphocyte counts compared with the control group (P≤0.05) (Table 2). In contrast, lymphocytes returned to the levels observed in the control group after treatment of LPS-exposed mice with LNT or BG (Table 2). Cardiac total WBC, neutrophil, and monocyte counts did not show any significant changes between the treatment groups. Pre-treatment of mice with LNT or BG decreased the counts of neutrophils and lymphocytes compared with the sham group. However, no change in monocyte levels was observed in the prophylaxis groups (Table 2).
Discussion
In this study, we used a murine model with direct inhalation of aerosolized LPS in order to assess its acute effects on peripheral blood parameters. LPS-challenged mice were then treated with a β-glucan extract obtained directly from fruiting bodies of shiitake mushrooms and with a nearly purified, commercially available β-glucan (known as lentinan) derived from the same mushroom species. Overall, our findings revealed for the first time that pulmonary delivery of aerosolized LPS clearly decreased the levels of RBC, HGB, and HCT in the peripheral blood of mice. These results are in agreement with findings obtained from intraperitoneal injection of LPS, where a significant decline was observed in several peripheral blood parameters, including RBC, HGB, HCT, MCHC, and PLT, in mice (24). Recently, it was also shown that injected LPS decreased cardiac RBC content in mice (5). In addition, changes in RBC aggregation and deformability were observed 24 hr after IP injection of LPS in mice (25). Similar findings have been obtained with rats, where IP injection of LPS reduced peripheral HGB (26,27).
This study also showed a decline in peripheral blood PLTs 16 hr after inhalation of LPS in mice. These findings are in perfect agreement with previous studies, where a rapid reduction in the number of circulating platelets occurred 3 hr to 24 hr post-LPS injection in the peripheral blood of mice and rats (24,26,(28)(29)(30)(31)(32)(33)(34). PLTs are non-nucleate blood cells with reported roles in hemostasis and immune responses, interacting with other cells of the immune system and secreting inflammatory mediators (35). Therefore, PLTs have been recognized as important agents in inflammatory reactions, particularly in their acute form. Since TLR4 is also expressed on the surface of PLTs, they possess a functional receptor for bacterial LPS-induced inflammation (34). It has also been demonstrated that the effect of LPS on blood PLT levels depends on the time elapsed after LPS injection; PLT depletion in the early hours post-LPS injection has been shown to prevent LPS-induced rapid shock but also to increase delayed lethality (34). Consistent with animal-based findings, increased mortality of septic patients has clinically been associated with reduced numbers of circulating PLTs (36). Taken together, the findings of the present study may suggest that direct inhalation of aerosolized LPS has acute effects on peripheral blood parameters in mice in the same way as intraperitoneal or intravenous injection of LPS. This study also showed a considerable increase in total peripheral blood lymphocyte counts and no significant change in the levels of total WBC, neutrophils, and monocytes in response to pulmonary delivery of LPS in mice. These findings may contradict a previous work in which total WBC levels obtained from the blood of mice showed a marked increase at 4 hr after injection of LPS followed by a decrease over 12 hr (37). Contrarily, another study reported that the WBC count declined significantly at 3 hr after LPS injection and increased with time, showing partial normalization at 12 hr in rats and mice (31). Thus, the changes in the levels of total WBC and its differential cells in response to LPS observed in this study might be related to a specific inflammation pattern of these cells 16 hr after LPS inhalation, which could be different from the outcome of LPS injection. However, results from some studies have shown that LPS is a T cell-independent B cell mitogen and polyclonal activator in mice. Mitogens are substances that cause DNA synthesis, blast transformation, and ultimately the division of lymphocytes (38).

Table 2. Effects of β-glucans from shiitake mushroom on improving total WBC and differential count in peripheral blood of mice challenged with LPS (n=5 each). Values shown are the means of five mice ± standard deviation of the mean. In each row, lower-case letters ("a, b") indicate significant differences between the means at the 5% level or less (P≤0.05); values followed by the same lower-case letters are not significantly different. LNT: Lentinan; BG: Polysaccharide extract containing β-glucans
According to the current study, mice given shiitake-derived β-glucans (BG or LNT) after being exposed to aerosolized LPS had significantly higher levels of RBC, HGB, and PLT and lower levels of lymphocytes in their peripheral blood. These findings are supported by previous observations that bacterial β-glucan significantly increased the WBC count, RBC count, HCT, HGB, and PLT in male rats fed BG compared with control rats (39). LNT is an immune activator that activates macrophages and lymphocytes, increases the chemotaxis of macrophages and the cytotoxic response of lymphocytes to Yac-1 and P-815 cells, and antagonizes the carcinogenicity of BBN in mice (40). To the best of our knowledge, this is the first in vivo study to report the relieving effects of mushroom-derived β-glucans on the hematological effects of LPS, although their immunomodulatory effects have been well documented in vitro (14,16,17) and in vivo (4,6,18,19,41,42).
Conclusion
The evidence from this study demonstrates for the first time that β-glucans from the shiitake mushroom, L. edodes, may be effective in attenuating the effects of inhaled aerosolized LPS on blood parameters. These findings may inform further research mimicking the characteristics of diseases caused by aerosol or airborne transmission of infectious microorganisms, in which hematological parameters would be affected.
"Medicine",
"Environmental Science",
"Biology"
] |
A New Approach for Heterogeneity Corrections for Cs-137 Brachytherapy Sources.
Background Most current brachytherapy treatment planning systems (TPS) follow the TG-43U1 recommendations for dosimetry in a water phantom and do not consider heterogeneity effects. Objective The purpose of this study is to develop a method for obtaining heterogeneity correction factors for Cs-137 brachytherapy sources based on pre-calculated MC simulations and interpolation. Method To simulate the effect of phantom heterogeneity on the dose distribution around Cs-137 sources, spherical water phantoms were simulated containing spherical shells of bone of different thicknesses (0.2 cm to 1.8 cm, in 0.1 cm increments) at different distances (0.1 cm to 10 cm, in 0.5 cm increments) from the source center. Spherical shells 0.1 cm thick at distances from 0.1 cm to 10 cm were used as tally cells. The doses in these cells were obtained with tally types F6, *F8, and *F4. The results indicate that the percentage differences between the doses in the heterogeneity sections and the doses at the same positions inside the homogeneous water phantom vary as the distance of the bone section from the source center increases, because the average energy of the photons reaching the bone layer decreases. Finally, the results of the Monte Carlo simulations were used as input data for MATLAB, and the percentage dose difference for each new configuration (i.e., a different thickness of inhomogeneity at a different distance from the source) was estimated using MATLAB's 2D interpolation. Results According to the results, the algorithm used in this study is capable of estimating dose with high accuracy. Conclusion The developed method, using the results of Monte Carlo simulations and dose interpolation, can be used in treatment planning systems for heterogeneity corrections.
Introduction
The recommendations of the AAPM Task Group #43 (TG-43 and TG-43U1) have been widely used in most treatment planning systems for the dosimetry of brachytherapy sources. One of the main defects of the TG-43 recommendations is the assumption of a homogeneous water phantom, which does not take tissue inhomogeneity into account. A variety of tissues (e.g., bone, soft tissue, and air cavities) with different physical and radiological properties exist inside the human body. Accurate prediction of the dose in the presence of heterogeneities is a very important issue in maximizing the therapeutic benefit of brachytherapy and radiation therapy. Heterogeneity corrections in radiation therapy treatment planning have been the topic of different studies [1][2][3][4][5].
The purpose of this study is to develop a new, fast method for heterogeneity corrections in brachytherapy treatment planning.
Monte Carlo Simulations
Today, Monte Carlo codes are accepted as the gold standard for medical dosimetry. The MCNP4C Monte Carlo code has been widely used for dosimetry in brachytherapy [6,7,8]. In this study, the MCNP4C code was used for heterogeneity correction for Cs-137 brachytherapy sources. This code is capable of simulating different photon and electron interactions, and its various tallies can be used for scoring the dose in the phantom.
In this study, first a spherical water phantom was simulated. The tally cells were spherical shells with thicknesses of 0.1 cm. To score the dose in the tally cells, tally types F6, *F8 (divided by the mass of the shell), and *F4 (multiplied by the mass absorption coefficient of the tissue) were used. Then the percentage differences between the results of these tallies were calculated.
In the next step, spherical layers of bone with different thicknesses were simulated inside the phantom at different distances from the source. Finally, the percentage differences between the doses in the homogeneous and inhomogeneous phantoms were obtained.
Method for heterogeneity correction
In this step, the results of the simulations were used as input data in MATLAB. The percentage dose differences for bone layers of other thicknesses at other distances were obtained using two-dimensional interpolation in MATLAB for thick bone layers, and distance-based correction factors for thin bone layers; a sketch of the interpolation step is given below.
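As a minimal sketch of how this lookup-and-interpolation step can be implemented, the pre-computed percentage dose differences can be arranged as a matrix indexed by layer thickness (rows) and distance (columns) and queried with MATLAB's built-in interp2. The variable names and the data file here are illustrative assumptions, not the authors' actual code:

```matlab
% Grid of pre-computed MCNP4C results (as described in the text):
thick = 0.2:0.1:1.8;          % simulated bone-layer thicknesses (cm)
dist  = 0.1:0.5:10;           % simulated layer distances from the source (cm)

% pctDiff(i,j): percentage dose difference between the heterogeneous and
% homogeneous phantoms for thickness thick(i) at distance dist(j).
S = load('mc_pct_diff.mat');  % hypothetical file holding the MC result matrix
pctDiff = S.pctDiff;

% Estimate a new configuration, e.g. a 0.75 cm bone layer at 4.3 cm:
tq = 0.75;  dq = 4.3;
estDiff = interp2(dist, thick, pctDiff, dq, tq, 'linear');
fprintf('Estimated percentage dose difference: %.2f%%\n', estDiff);
```

Bilinear interpolation over the (thickness, distance) grid is the natural choice here because the pre-computed MC points are dense (0.1 cm and 0.5 cm spacing), so the dose-difference surface varies smoothly between neighboring grid nodes.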
Comparison of different tallies
The doses at different distances (0.1 cm to 10 cm) in the water phantom were obtained using three common tally types: *F4, F6, and *F8. Figure 1 shows the percentage differences between the results of the three tallies. According to the results, the percentage difference between the dose calculated by the *F4 tally and the other two tallies increases with increasing tally cell distance from the source. The maximum difference between the *F4 tally and the other tallies was found to be 3%, while the differences between the doses obtained by tallies *F8 and F6 are less than 1% at all points. Therefore, the F6 tally was used for obtaining the dose in heterogeneous phantoms containing bone layers.
MCNP4C results for heterogeneous phantom
The percentage differences between the doses in bone layers of different thicknesses (from 0.2 cm to 1.8 cm) located at a 1 cm distance from the source center in the heterogeneous phantom, and the doses at the same positions inside the homogeneous water phantom, are shown in Figure 2. As is obvious from the figure, the dose in the bone layer is less than the dose in water. This is because, in the photon energy spectrum at this distance (r = 1 cm), the Compton interaction is more probable than the photoelectric interaction; therefore the dose in the bone layer is less than the dose in the water phantom. Figures 3 and 4 show the dose variation inside the heterogeneity when bone layers of different thicknesses are located at distances of 5 and 7 cm from the source center. Figure 5 shows the percentage dose difference between the water phantom and phantoms containing bone layers of equal thickness located at different distances from the source. The figure shows that the dose in the bone layer increases as the distance of the bone layer from the source center increases. This is because increasing the distance of the bone layer from the source center has an important effect on the energy spectrum of the source: the average energy of the photons reaching the bone layer decreases in layers far from the source. The photon energy spectrum at different distances from the source (obtained by the F8 tally) is shown in Figure 6. The average energy of the photons reaching each layer is obtained by equation 1.
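Equation 1 is presumably the standard fluence-weighted mean energy; a plausible form, with the symbols here being an assumption rather than necessarily the authors' notation, is

\bar{E} = \frac{\sum_i E_i \, \phi(E_i)}{\sum_i \phi(E_i)}

where \phi(E_i) is the photon fluence scored in energy bin E_i of the tallied spectrum.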
The results show that the average energy of the photons reaching the far layers decreases significantly; therefore, the photoelectric effect begins to become dominant, so that the dose in the bone layer becomes greater than the dose in water.
Correction factor for thin inhomogeneity
The ratio of the dose in a thin inhomogeneity layer (i.e., less than 0.4 cm thick) to the dose in the same layer in the homogeneous water phantom is shown in Figure 7. According to the figure, a correction factor can be introduced for the dose rate constant obtained in the homogeneous water phantom, according to equation 2.
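Equation 2 can be read as defining a distance-dependent multiplicative correction to the TG-43 dose rate constant; a hedged sketch of its general shape (the exact fitted dependence on x is an assumption left unspecified here) is

\Lambda_{corr} = CF(x) \cdot \Lambda_{water}, \qquad CF(x) = \frac{D_{bone}(x)}{D_{water}(x)}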
where x is the distance of the inhomogeneity from the source center.
The results of interpolation for thick inhomogeneity
The MC-simulated percentage dose difference between the inhomogeneous phantom and the water phantom, for a 2 cm bone inhomogeneity located at different distances, was compared with that obtained by the new method (two-dimensional interpolation, with correction factors applied for points at the boundaries). The results indicate that the maximum error of the new method for dose estimation in the heterogeneity layer is less than 0.4%.
Conclusion
A new method of inhomogeneity correction for thick and thin layers of inhomogeneities was proposed in this study. Different thicknesses of inhomogeneities were simulated at different distances from the Cs-137 source using the MCNP4C Monte Carlo code. The results of the simulations were inserted into a MATLAB program in matrix form. MATLAB's 2D interpolation was used to predict the behaviour of the dose in bone heterogeneities of different thicknesses and at different distances from the source. The results indicate that the 2D interpolation can predict the changes in dose in the presence of thick inhomogeneities. For thin inhomogeneities, a distance-dependent correction factor for the dose rate constant has been proposed.
The accurate results for the bone inhomogeneity show that the new method can be generalised to other inhomogeneities inside the phantom and to other high-energy brachytherapy sources. This method can be used in treatment planning systems to increase dosimetry accuracy.
"Medicine",
"Physics"
] |
Theodramatic Rehearsal: Fighting Self-deception through the Dramatic Imagination
This paper seeks to appropriate the insights of dramatic theology for Christian psychology and soul care. According to Kevin Vanhoozer, Scripture is the 'script' for human beings' fitting participation in the acts and deeds of God in the world (i.e., 'theodrama'). Keeping with this dramatic paradigm, the author will explore what 'rehearsal' might entail by drawing from a branch of psychotherapy called 'psychodrama.' The main question to be addressed in this appropriation of dramatic theology is, "How might dramatic rehearsal combat self-deception?" The author will only begin to answer this question, but in the attempt it is hoped that further reflection and clarity will be induced.
become us."To the honorable Westley, there is something about lying that does not befit a man of action.Men of action do not resort to lying; it is unsportsmanlike.Rather, these men take action.
According to the Bible, human persons do not develop into full humanity unless they assert their will: "Be fruitful and multiply and fill the earth and subdue it and have dominion over the fish of the sea and over the birds of the heavens and over every living thing that moves on the earth" (Gen. 1:28). If this command is foundational to full human life, then action is required to mature as a human. According to one Christian psychologist, human agency entails several qualities that are essential to development: rational-linguistic ability, a high degree of self-awareness, the power to reason, responsibility, and imagination [2]. Humans are not simply objects meant to be acted upon, but have their telos as actors who imagine possibilities, deliberate upon those possibilities, make decisions, act, and bear responsibility, all in full conscious awareness.
The Theodrama
The analogy of drama can be a helpful metaphor to understand the full flourishing of human beings as agents, or actors. One theologian who has greatly capitalized upon this analogy is Kevin Vanhoozer, who describes God's action in the world as 'theodrama' [3]. The God of Scripture created the world, sustains it, and is redeeming it. From the beginning, he has actively directed every course of action in the universe according to his plan. His action was revealed preeminently through the history of Israel, yet from the origins of man God has revealed himself as the Creator and Lord of every person, including those outside Israel. Through the descendants of Abraham, God has extended his blessings to the whole world. In the Old Testament, this outreach was dominantly worked out in a centripetal way, drawing the Gentiles into the fold of Israel. After the coming of Christ, however, Father, Son, and Holy Spirit have turned the focus outward, so that the Word of God might ripple out in a centrifugal wave to all the nations.
In the Bible, the acts of God are conveyed in a narrative form. Scripture frames God's actions in Israel and the world as a story. Of course, narrative is one specific genre employed throughout the sixty-six books of the Bible. Large portions of the Old Testament are story forms (e.g., the stories of Abraham, Jacob, and Joseph), and in the New Testament the four gospels provide biographical narratives of Jesus' life. According to a canonical perspective, however, the entire sweep of God's activity in the world may be seen as one grand story. His relationship with the nation of Israel, according to Klyne Snodgrass, is summarized in several of Jesus' parables: the wicked tenants (cf. Matt. 21:33-46), the wedding banquet and the feast (cf. Matt. 22:1-14), the barren fig tree (Luke 13:6-9), and the two sons (Matt. 21:28-32) [4].
Moving from story to drama as a metaphor for God's activity might seem like a small shift, but there is a significant difference between them. Both mediums are about action. As Aristotle makes clear in his Poetics, narrative and drama are forms of mimesis, or the imitation of action through its organization as plot. The key difference is that while narrative tells us the action, drama shows it. At first, again, this difference may seem insignificant, unless one consents to viewing the disparate texts of the Bible as a unified canon.
Scripture does not take the form of drama; it comes as a written compilation of books, which include many generic forms. It would seem that the message of the Bible is limited to a form that can be told, but not shown. Or is it? While it is true that the Bible comes in a written form that can be read or heard, it is also the case that the Bible can be lived and seen. There is a sense in which Scripture is delivered in a textual medium, and yet there is also another sense in which it is delivered in a dramatic medium. What is the dramatic medium?
Scripture's authors all seem to assume that God has entered into the world and acted in a way that can be perceived by human beings. Vanhoozer uses the term 'theodrama' to capture this idea: "Theodrama is a matter of God's taking the initiative to make himself known to others, of God's parting the heavenly curtain in order to reveal and redeem" ([5], p. 7). The theodrama is God's interaction with human beings through speech and deed, with the climactic turning point being the incarnation, life, death, resurrection, and ascension of Jesus Christ. Vanhoozer says, therefore, that the medium of the theodrama is "living persons in dialogical interaction and covenantal relation" ([5], p. 7). The theodrama is recorded in the Biblical text, but that does not mean the theodrama has ceased to play. God still interacts with human beings, indwelling and empowering many to know him and participate with him in his action. As Vanhoozer would put it, human history is now somewhere between the climactic act (Christ's cross and resurrection) and the curtain call (the Last Judgment) of the theodrama.
Viewed in this way, the Bible is not only a record of previous acts in the theodrama, but also a script for its continued performance. Vanhoozer says, "The Bible is the authoritative account of the mighty acts of God in the past. As such, it is a dogmatic text that regulates Christian belief. But it also regulates Christian behavior, prescribing (which is to say, directing by writing) the way of Jesus Christ" ([5], p. 8). The Bible is thus the 'script' for a human being's fitting performance in the theodrama.
Being a script would imply that Scripture has instrumental value as well as intrinsic value [6]. Scripture is thus a script that directs actors in the theodrama. The Law given in the Torah, the promises and warnings of the prophets, the expressive lyrics in the Psalms, the assertions in Paul's letters, and the declarations given to individuals and groups throughout the Bible are all examples of understanding Scripture as the communication of various speech-acts. Drawing from speech-act theory, one might see that Scripture does things with its words besides relaying propositions, or assertions. One way the words (locutions) function (illocutions) is to command people to live in its way, or to perform according to its direction. Scripture's words can be understood to have value in themselves, but they might also have instrumental value by providing direction for fitting performance in the theodrama.
Theologians like Vanhoozer within the Judeo-Christian tradition have maintained that, despite Scripture's usefulness as God's script, human beings fail to perform as the 'script' directs. People do not show up as 'men of action,' as idealized by Westley in William Goldman's fantasy story. Rather, like Count Rugen, lies become them. Authentic action is negated by self-deception. Self-deception, it will be argued, impedes an individual's participation in God's drama by corrupting the mind, heart, and will through the imagination. On the other hand, self-deception can be attacked and overcome to some degree, and this confrontation happens in the same faculties (mind, heart, and will) through the imagination. In the following section, the author will briefly define self-deception and then explore its effects on the mind, heart, and will.
Self-Deception
Self-deception is a concept that has received ample consideration. It has been discussed across various disciplines, especially psychology, ethics, and theology. In psychology, self-deception is often discussed in terms of defense mechanisms, or defensive activity [7]. Psychology has looked at the motivations for self-deception and clarified the means for hiding these motivations and desires through defensive structures. For example, one desires affection from others, but he has perceived from childhood experience that affection can be won or lost, depending on how he appears to significant others, such as his parents. If he does something wrong, he will not be loved. When, therefore, something threatens his sense of being loved or accepted, he defends against the negative feeling that comes from such a threat. How does he defend himself? By denying or ignoring negative experiences with defensive measures, such as hiding, avoidance, blame-shifting, or projection. Defensive activity can reveal a deep need one has to affirm his goodness and worth. One may turn to self-deception when he perceives his sense of worth is in jeopardy.
Ethics has pointed to self-deception as an absurd but nevertheless real experience. Self-deception is absurd because it involves double-mindedness in the person. On the one hand, a person knows that something is the case ("These Jewish people are being eradicated and I am standing by"), and yet, on the other hand, he pretends that this point is irrelevant or inconsequential ("I am doing nothing to harm anyone"). Self-deception is lying to oneself in order to serve a particular interest. Though there is plenty of evidence towards one course of reasoning, it is supplanted by a contradictory false belief in order to protect a desire connected with the false belief [8].
Lastly, theologians have often pointed to the root of self-deception in the sinfulness of humanity. One employs self-deception to maintain a false belief about oneself and God. To do so one must disconnect from God (i.e., tell oneself God does not exist) or become God (i.e., tell oneself he or she is God). John C. Knapp says, "The self-deceiver rejects the God-relation and the truth that it may reveal. Self-deception often entails suppressing or evading knowledge of the truth in order to maintain false beliefs that support a desired self-image" ([9], p. 17). Again, self-deception arises out of a need to see oneself in a certain way. There is a deep desire for value, significance, and worth; because of sin, however, one's value is not received from God, but sought outside him. Theologians call this sinful desire for self-worth 'pride.' Knapp says: "Theologically speaking, self-deception is of deeper significance than the mere acts of holding conflicting beliefs, avoiding unwanted information, or protecting self-esteem. To attempt to be what one is not, to willfully believe a falsehood about oneself or one's situation, is to attempt a deception before God who is the truth. Self-deception, therefore, is the handmaiden of pride" ([9], p. 18).
At the root of self-deception, therefore, lies an inclination of the will that mutates one's desire for self-worth into the sin of pride.
Let us now consider how self-deception, motivated by pride, might affect the mind. Intellectual knowledge of the theodrama is required for fitting performance, according to Vanhoozer. Knowledge, however, can be distorted. An actor can memorize the script, but if he or she interprets it incorrectly, the performance will be poor. In order to misconstrue the theodrama or one's place within it, self-deception hijacks the mind and uses reason in order to serve its sinister purposes. Of course, self-deception is not rational, but the self-deceived person employs a veneer of reason to justify his underlying motive [10]. While one might assume that the solution for a self-deceived mind is to present it with better thinking (e.g., cognitive therapy), it is not primarily the soundness of one's reasoning that is the root issue, but the underlying motive or interest being protected. The content of one's ideas, whether sound or not, is only the surface issue, a shield that can shift from here to there, blocking rational attacks and deflecting attention from its real function, which is to deceive oneself. The self-deceived mind does not even consider that it needs to be intellectually corrected; in fact, it heavily arms itself against any such notion! In the Bible, the archetypal character for this phenomenon is the Pharisee. For the typical Pharisee presented in the gospels, the truth is used to hide from truth. Merold Westphal demonstrates how the Pharisees and the rich young ruler used the content of their ideas to hide the function: "It was the function and not the impeccable content that separated them from the kingdom. Orthodoxy enabled the young man not to notice his idolatry and the Pharisees to be inattentive to their self-righteousness" ([11], p. 14). Human intelligence is a powerful gift, but it would seem that the greatest mind can easily serve self-deception and thereby foil one's participation in God's purposes in the world.
The emotions would also seem to be required for fitting performance, but human affectivity can be distorted. Jonathan Edwards faced this problem during the revival movements of the Great Awakening, in which it was difficult to tell whether emotional fervor was a true mark of spiritual affection or of self-deception [12]. If one's intellect can manipulate thoughts, even very true thoughts, in order to deceive oneself, a person can use his emotions in a similar way. Dietrich von Hildebrand has illuminated the heart's complicity in self-deception by describing how powerful, deeply felt emotions can enable one to confuse enthusiasm about a virtue with its actual possession: "[…] the illusion consists in mistaking the intensity of the enthusiasm for a sign that one already possesses the virtue one is enthusiastic about. Because one lacks spiritual sobriety, one fails to distinguish two strata of personal reality, namely enthusiasm for an attitude or virtue and the real possession of that virtue" ([13], p. 52). An example of such self-deception appears in the incident between King David and Nathan (cf. 2 Sam. 12). Nathan's story of the poor man and his lamb ignites David's heart to feel deeply, or enthusiastically, as the text says: "Then David's anger was greatly kindled against the man, and he said to Nathan, 'As the LORD lives, the man who has done this deserves to die'" (2 Sam. 12:5). What cannot be denied in this incident is the validity of David's emotion. His anger was valid, for he rightly discerned the injustice done to the poor man. Anyone who does not get angry hearing such a story is probably in a worse position than David. As von Hildebrand says, "The enthusiasm may be genuine and is as such a first step which may lead to real obedience. It is even the basis for the acquisition of this virtue" ([13], p. 52). If David had not gotten angry, there would have been no bridge to Nathan's real point: "You are the man!" Yet the fact that David could get so angry, and rightly so in a sense, serves to show how David was self-deceived in his heart. He could feel an enthusiastic animosity towards injustice done by someone else, but what about the injustice he himself committed? If not for the spiritual sobriety of Nathan's indictment, David's fiery heart would have cloaked his own sin and subverted his fitting performance as King of God's people.
Like the mind and heart, the will's facility in the theodrama can also be negated through self-deception. Just as a self-deceived person makes use of the mind's ideas and the heart's affections, so he can manipulate the will's volitions. One might consider what we have said so far and reply, "Well then, it seems the thing to be done is to tell the truth. One must will himself to honesty! Authenticity! One must commit to being true in his thoughts and emotions." This thinking overlooks the fact that the will can also be employed in self-deception. Stephen Crites might offer this rejoinder to the suggestion that one can will himself to honesty: "But from the fact that self-deception is 'in some sense' willed, it does not follow that you can therefore overcome it by a sheer teeth-gritted act of will" ([14], p. 110). A person may have a fully operational will, and he may employ his will expeditiously to accomplish enormous feats, yet all to deceive himself from his will's deep-seated corruption and disobedience in the sight of God. Consider King Saul. Instead of obeying God's command to destroy all the spoils of the battle, he keeps the best. When confronted with his sin, he disowns his fault by offering sacrifices to God out of those spoils. He does not passively stand by; no, he acts. Yet his act of will was a feigned obedience, a deceptive gesture, which led to his dismissal from an active role in God's theodrama.
From this discussion, one may make two inferences. First, self-deceptive behavior is characterized by some degree of unconsciousness. The Pharisees were unconscious of the way they used their orthodox minds to mask their heterodox hate for the Son of God. David was unconscious of the way he used his heartfelt anger to reinforce his self-righteousness and suppress his guilt and shame. Saul was unconscious of the way he willed to serve God in order to cover up his disobedience. Peter Sedgwick pithily summarizes our confounded situation: "Human action is not a series of responsible decisions taken by self-aware people in control of their lives" ([15], p. 404). Self-deception disables self-awareness and responsibility, which are integral to mature human agency. One cannot perform fittingly in the theodrama if he is unconscious of his own self. Or, to state it positively, to rightly play one's part in the theodrama one must become more self-aware instead of self-deceived and unconscious.
Second, self-deception is not characterized by total unconsciousness, but by a 'willing suspension of disbelief' [11]. This phrase was coined by Samuel Taylor Coleridge to describe the aesthetic experience of entering a world created in the imagination [16].1 J.R.R. Tolkien calls this a 'secondary world', distinguishing it from the primary world not in terms of real versus unreal (since what one imagines is by no means less real just because it happens inside the mind) but in terms of actuality versus perception [17]. In other words, self-deception works inside a secondary world formed by the imagination that affects how the primary world is perceived. In a work of fantasy like Tolkien's The Lord of the Rings, in order for a reader to enter Middle-earth he must willingly suspend his conscious awareness of the separation between primary and secondary reality. In order to imagine the 'subcreated' (Tolkien's word) world where hobbits and elves exist, one must break his attention away from whatever else he could be thinking about and direct it wholly, with mind, heart, and will, to participate in the imaginative construction and apprehension of that world. Of course, Middle-earth is not one's primary world. In order to get there, one must suspend his disbelief of it. The point is, however, that when one arrives, Middle-earth takes on the attributes of reality in his perception; the towers of Minas Tirith are seen and the falls of the river Anduin are heard. One's mind, heart, and even will become captive to that secondary world formed by one's imagination.

1 By "imagination" is meant the faculty of forming mental images or concepts that are not actually present to the senses. While the imagination is a faculty of the mind, it has a special function that distinguishes it from other mental faculties: to form images that are not actually present to the senses.
Fairy Stories," the secondary world can affect how one perceives the primary world if one accepts what it offers to tell about reality (e.g., that evil is real and terrible, but good will ultimately triumph) [17].In a similar way, self-deception takes one captive to a distorted perception of reality by means of the imagination: false images of oneself and the world are formed in the mind because one decides to suspend belief in what is actually true.
The thesis of this paper is that in order to combat self-deception the imagination must be re-oriented to reality through a willing suspension of disbelief. Rather than believing in a distorted perception of reality, a person desiring to perform in the theodrama must allow his mind, heart, and will to be captured and persuaded by an imaginative re-construction and re-apprehension of himself and the world. For the remainder of our time, we will consider how one can suspend his (false) beliefs and entertain a new perspective that invites alternate beliefs through an appeal to his reason, emotions, and desires. First, we will observe how the imagination works with the mind, heart, and will to either refute or confirm one's beliefs about oneself and the world. Second, we will ask how we might integrate Scripture, or "the script", with a therapeutic tool known as psychodrama in order to reconstruct the secondary world of the imagination and transform one's experience of the primary world, mentally, emotionally, and volitionally. This process we will call theodramatic rehearsal. It will be suggested as one possible method for scaffolding the construction of the secondary world in the imagination. Finally, we will draw some conclusions for further study.
As has been said, the mind, heart, and will are susceptible to the machinations of self-deception through one's use of the imagination. At the same time, if self-deception is understood as a form and manifestation of sin, it is therefore parasitical; like sin, the only way it can operate is by distorting something that is useful, good, and intended for human maturation: imagination. Thus, the imagination should not be equated with self-deception or sin; rather, it is the means for self-deception. The imagination is a useful instrument. What is its use, good or bad?
The instrumental use of the imagination is that it can either confirm and reinforce beliefs, or it can discredit them. A person may hold certain beliefs about the world, but he can suspend those beliefs, even if just for a moment, because his imagination allows him to entertain alternate beliefs. To a class of college freshmen, a clever, charismatic professor may winsomely and persuasively propose ideas about the world that invite a willing suspension of long-held beliefs or assumptions in the minds of his students. The imagination allows one to entertain a new perspective, contradictory to one's own, through a persuasive appeal. If this appeal is strong enough, then one's former beliefs may be disavowed and cast aside. On the other hand, the appeal may be too weak, and then the imagination continues to align with the old way of seeing. Imagination is like a projector that displays possible images, which either strengthen beliefs/assumptions or weaken them. While the imagination can be used to construct or build up a false belief, it can also deconstruct it. The next question is how it does so. Looking back at the example of David and Nathan above (the one successful case), we can see the imagination working with the heart, mind, and will; specifically, we see how it allowed David to enter into a narrative willingly so that he was able to think, feel, and act in a certain way. As Nathan told the story, David imagined it. David willingly participated with the narrative by allowing himself to "see" the characters and follow their actions with his mind's eye. One can be reasonably sure of this fact without having access to David's mind for two reasons. First, we ourselves can imagine the story as Nathan tells it, so we can infer that David could as well. Second, David evidences by his response that he has followed and understood the story: "Then David's anger was greatly kindled against the man, and he said to Nathan, 'As the Lord lives, the man who has done this deserves to die, and he shall restore the lamb fourfold, because he did this thing, and because he had no pity'" (2 Sam. 12:6). David's imagination cooperated with the story in a way that allowed him to construe its events in a certain way, feel emotion, and act in response. The text focuses first on the emotion, telling us that David felt great anger toward the rich man. The text then says that David reacted by pronouncing a judgment: the rich man should die and restore the lamb fourfold. One could infer the reasons for David's judgment, but the text tells us plainly that it was because the rich man "did this thing, and because he had no pity." David was thus affected in his heart, mind, and will by virtue of his imagination, which allowed him to enter into Nathan's story. He believed willingly what Nathan related, to the degree that he was moved to think, feel, and act in a certain way, namely, in a way fitting to the story. He responded fittingly because he construed the story for what it was, an instance of injustice, i.e., "he did this thing," and cruelty, i.e., "he had no pity." He responded fittingly because he felt a fitting emotion in response to the injustice and cruelty: anger. And he responded fittingly because he pronounced judgment; David's role as king required him to act as judge over his people's affairs. There seems to be no hesitation on David's part to participate in the 'secondary world' projected in Nathan's story.
One might wonder why David's imagination cooperated as willingly as it did. First, Nathan was a respected voice to David. The reader is told that Nathan was a prophet of the Lord who had come to David before, and moreover David heeded Nathan's words as the very word of the Lord (2 Sam. 7:3-29). Nathan bore a high degree of credibility with David that was likely a strong factor in his willingness to cooperate imaginatively. A second likely factor in David's willing participation is related to his deep concern for justice and the value he placed on kindness and compassion. His reason for condemning the rich man was based on the injustice he committed and the lack of pity he showed. The value David placed on justice and pity made him acutely attuned to the story; his values influenced his seeing. According to William Wood, Blaise Pascal saw a close link between what one values and the way one sees: "[T]here is a direct relationship between the way we value beloved objects and the way we see them, which affects the beliefs we form about them" ([18], p. 378). David valued compassion, and so he was touched by the poor man's suffering at the hands of his wealthy neighbor. In other words, the story mattered to David because it dealt with concerns close to his heart. Yet a third factor in David's imaginative cooperation is the story's logic and simplicity, which allowed him to easily and quickly construe its meaning. The story takes up five sentences in an English translation; it has two main characters, and after briefly comparing the rich man to the poor man and describing the poor man's affection for his ewe lamb, the story delivers the main action in one sentence: "Now there came a traveler to the rich man, and he was unwilling to take one of his own flock or herd to prepare for the guest who had come to him, but he took the poor man's lamb and prepared it for the man who had come to him" (2 Sam. 12:4). There is little chance that one hearing this clear, concise narrative could misconstrue it. Surely David tracked along with it as well as any listener could, for the way he construed it facilitated an emotional and volitional response. Robert Roberts defines emotions as 'concern-based construals,' meaning that emotions flow out of the way one construes his experiences in light of his core concerns [19]. David's concern for justice and compassion worked in tandem with his construal of the story, so that he responded with anger towards the rich man, whom David willingly condemned to certain punishment.
Note that although Nathan had finished the story, David's imagination continued to work. David did not stop using his imagination when the story ended; rather, he kept constructing the story so that he could reach a conclusion. Here we see that Nathan had not provided David with a self-enclosed narrative but an open-ended one, which invited a further response. And David did respond, with emotion and decision, while still using his imagination. David used his imagination to finish the story with a fitting ending: punishment for the rich man and payment for the poor man. The point here is that the imagination can do more than entertain a narrative; it can cooperate in the construction of the narrative.
Further, when a person cooperates in the making of an imaginative construct, that construct in turn has reciprocal effects on the will [14]. David not only willingly entered into the story through his imagination, but his imagination cooperated in the story's construction so that he was moved to action, becoming himself one of the story's agents. Using Tolkien's terminology, one could say that David's willing participation in the 'secondary world' of the story led to effects in his 'primary world.' This operation accords with the function of the imagination described above: it can either construct and reinforce beliefs, or it can dismantle and discredit them. The imagination builds a secondary world in the mind that may be believed in, and if belief is achieved, willing action follows. Action demonstrates belief, belief that rests on an imaginative construct.
For David, his imaginative engagement in a story allowed him to temporarily disengage from his self-deception and lower his defenses, even to the point that he unintentionally condemned himself. For a fleeting moment, a new imaginative construct released David from his old, false imaginative construct. This phenomenon demonstrated in David's life is not unique. For many people, the imagination is daily employed to either construct a false view of the self and the world or to construct a true view. For example, Crites describes how a person might imaginatively support the perception that he is a scholar: "If I am a scholar, I have books strewn on my desk and paper in my typewriter with a few lines typed on it, my pipe is lit, my lips are severely pursed, and an apologetic intruder could never guess how much of the afternoon I have spent pursuing sexual fantasies and muttering venom against my enemies" ([14], p. 121). Such an imaginative ability can veer a long way off the direction outlined in God's script, for it works to corrupt the heart, mind, and will. On the other hand, if the imagination was created with a good purpose, then it can be used to promote fitting performance in the theodrama. In David's case, one imaginative construct held him captive to self-deception, while another, instigated by Nathan, was able to release him. In other words, prior to his encounter with Nathan, David imagined himself to be innocent. Clearly this imagined construct was false. Yet by suspending his false construct and engaging in a new and different one told by Nathan, David was able to escape from his false self-image and receive the truth about himself: he was the guilty man who deserved punishment.
In David's life, as in the lives of many individuals, the imagination had an effect on his identity, or the way he saw himself. A person constructs the secondary world with himself as a player in it; he has a role in the drama of his imagination. One crafts a world that usually reinforces his identity. As seen with David, that identity can be either false or true.
In the construction of identity, the imagination works off of the suggestions of others outside oneself. Other people add bits and pieces to one's imaginative self-construct, like a mosaic. Harter describes the construction of one's self, or identity, as a process that involves the individual as well as significant others in his or her life [20]. Parents erect a scaffold that supports a certain construct of a child's identity. On a larger scale, social narratives feed one's own auto-narrative, aiding the formation of identity [21]. The individual person fashions his own self-image, but not without enlisting others in the project. In David's case, there may have been many people supporting his false self-image, such as servants and counselors. Nathan, however, refused to support it, and his subversive influence was enough to break David's false self-image.
Because self-deception co-opts the imagination, and, as we have just noted, the imagination is influenced by others outside oneself, the social dimension of self-deception must be acknowledged. One's self-deception is perpetuated (or hindered) by other people as well as by oneself. Beguiled and allured by one's ideal (and false) self-image, he seeks as many ways as possible to protect and nurture it, so as to further blind himself from the truth. One way to do so is through the support of others: "In order to deceive myself I must enlist the complicity of others, however passive or unwitting their complicity may be" ([14], p. 124). If one can garner approval for his self-image from someone else, then that self-image is so much more secure. An idiosyncratic illusion is exchanged for a verified reality. In the case of a newlywed husband seeking his wife's approval, he may look for it in various forms, but perhaps none simpler than spoken praise. If he receives it, his self-image is bolstered, but if not, it is threatened. Depending on her response, her husband's imaginative self-construct might be met with approbation and support or with confrontation and resistance. With a word she can promote his false self, and with another she can call it into question. Therefore, just as self-deception is a social project, so is its exposure.
To fight self-deception, one must address his or her imagined identity as both an individual and social construct. In a moment, we will look at a social context (i.e., psychodrama) for the imagination's reorientation according to the theodrama. It is necessary to recognize that reorienting the imagination in order to reconfigure one's identity cannot be done solo. Identity begins to form in childhood, taking on the suggestions and scaffolding of others. If identity is in need of reformation, it will require outside help: "Because the imagination is socially constructed, reorienting the imagination requires something like a massive program of counter-habituation comparable to becoming a native member of a wholly new society" ([18], p. 380). For Christians, that new society is the Body of Christ (Eph. 4:12), or the New Self (Eph. 2:15). The ultimate locus of a Christian's identity is found in God through Jesus Christ [22]. It is thus as a member of Christ's body that a Christian must reconstruct his self-image [2]. To fight self-deception, one must consider who he or she is as a particular member of a universal body. Or, to use the language of the theatre, self-deception must be addressed with an awareness of one's role within the entire performance of the theodrama. Drama turns out to be an apt means for self-insight, as contemporary scholars have already begun to explore [23].
Psychodrama
Psychodrama, as a recognized form of psychotherapy, was introduced to psychology in the early 20th century by J. L. Moreno. Moreno undergirded psychodrama with deep philosophical convictions about the nature of space, time, reality, and the cosmos [24]. Psychodrama has been used by thousands of practitioners because of its philosophical depth [25]. Moreno believed that psychodrama tapped into the most significant aspects of man's existence, and that was the reason he used it: "A therapeutic method which does not concern itself with these enormous cosmic implications, with man's very destiny, is incomplete and inadequate" ([24], p. 11). The human person, he believed, is a very significant being: "Man is a cosmic man, not only a social man or an individual man" ([24], p. 10). As a form of therapy, psychodrama taps into the deepest part of one's soul: "Psychodrama is, as the word implies, a dramatizing of psyche, a kind of soul theatre" ([26], p. 562). The individual person is given a stage for her life, in order that she may see it in a clearer way, and that she may learn to abandon 'old scripts' and rehearse new and more authentic scripts for life [27]. The overlap between the individual focus of psychodrama and the universal focus of the theodrama is clear. Psychodrama focuses on one actor's role in the theodrama by making it explicit to one's attention. This process is facilitated by five essential components: a stage, a protagonist, auxiliary egos, a group, and a director.
First, a stage is used. The stage may be as complex as the large three-tiered version with balcony and lights used by Moreno, but any place will suit as long as it can fit the number of people involved. The stage, as a term in theatre, is where the drama is enacted, the environment or world in which the actors perform their roles. The stage is a space that enables a person to "be and to act himself in an environment which is modeled after that in which he lives" ([24], pp. 6-7). In psychodrama, the stage is where a person's environment is imaginatively reconstructed and where a vignette of his or her life's drama is performed. For example, a person may imagine the stage as his office. If available props are at hand, the stage may be furnished to resemble the office with a chair, desk, etc. With the stage set, the person then plays out a scene from his life in that environment.
Second, the protagonist is the person receiving the primary focus in the psychodrama. The protagonist is chosen from among the group members as a representative. This person is usually chosen because his present issue or problem is one with which other group members identify or one which they share in common. The environment of the stage is the protagonist's environment, and the action performed concerns the protagonist. The protagonist will enact situations from his past and present, and he may also enact situations in view of the future.
Third, auxiliary egos are other members of the group who join the action centered around the protagonist. They may serve as other people in the protagonist's situation (spouse, boss, father) or as parts of the protagonist's self (little boy). The protagonist chooses group members to play these parts; they are given the needed information to play their roles, and then they join the action.
Fourth, the group itself is a basic instrument of psychodrama. Although psychodrama may be performed with just a protagonist and one other (the director), it is usually done as a kind of group therapy. I will assume that the group aspect is essential to psychodrama, since the importance of finding one's identity within a corporate body has been emphasized. The group chooses the protagonist and assists by becoming auxiliary egos. At the end of a session, the group offers support to the protagonist and shares perspectives about the experience.
Fifth, the director guides the entire process. The director assists in selecting the protagonist and auxiliary egos. He guides the protagonist in setting up the scene and arranging objects or characters on the stage. The director leads the protagonist through the action with directives about what to do, observations about the scene, and questions that lead the protagonist onward in the drama. The director begins the process and guides it to the end. His main task is to facilitate the protagonist's action into an opportunity for change and growth.
These five components allow psychodrama to give a person an opportunity for change and to be liberated out of a lurch in life by rehearsing new possible ways of living. Psychodrama provides a person a space to 'rehearse life.' Before ever coming to psychodrama, people already act out rehearsed ways of living according to the scripts they have accepted or fashioned for themselves; psychodrama offers a person a 'new stage' with a different audience [27]. For example, a father may have taken on certain scripts that inform the way he acts towards his children at home. One of the scenes might proceed like this: Dad walks into the living room and sees his three-year-old daughter jumping off the couch onto a pile of pillows. He furrows his brow and says abruptly, "Be careful! You could hurt yourself." Daughter continues to play, but goes to another room with a toy. Dad mutters, "What a mess this place is. You'd think it was a jungle gym instead of a living room!" In a session of psychodrama, the director might help the father reenact this scene with his daughter by asking him to reverse the roles: "If you were your daughter, what would you be receiving from Dad's words and behavior in this scene?" Reenacting the scene in the role of his daughter, it might then dawn on him that, if he were her, he would feel surprised and maybe even frightened by Dad's sudden outburst. He might also internalize a line Dad often says ("Be careful!") and take on that anxiety as his own. Role reversal is a powerful tool in psychodrama, because it places the protagonist in another person's shoes, providing a new vantage point from which to behold his actual ways of living [28]. Psychodrama can do more, however, by empowering one to break out of what is actual into what is possible. Having been made aware of the script one plays out before his daughter and its effects upon her, the father has the vision and motivation to discard this script and start rehearsing a new one. At this point, psychodrama offers the protagonist "resources never before dreamed of, new capacities for understanding of himself and others, new bravery and new warmth. He is helped to be aware where once his eyes were shut, to speak where once his tongue was stilled, to live where once he had only existed" ([29], p. 235). Enter the theodrama.
Actually, the theodrama has been here all along, because it encompasses the whole swath of God's drama in the world, and that includes the role that each particular person's life plays within it. Psychodrama explicates an individual's role as an actor in the universal theodrama. And, bringing this relationship to bear on self-deception, psychodrama reveals the latent forms of mendacity that hinder one from fitting performance in the theodrama. Psychodrama can prove to be a powerful aid in fulfilling one's theodramatic role.
Psychodrama enables one to become conscious of self-deception by reenacting and rehearsing everyday actions. Rehearsal is a necessary part of renewing one's theodramatic imagination and overcoming self-deception: "To become explicitly conscious of one's situation [...] demands that one rehearse what one is doing. We seldom feel it necessary to spell out our engagements in any detail, however [...] There are many things we do every day (dressing, eating, playing with our children or talking with our spouses) that can be carried on without bothering to delineate how they may contribute to an overall life-plan. We seldom 'spell out' what we are doing unless we are prodded to do so" ([29], p. 102).
Psychodrama prods. It brings one face-to-face with the way one lives, revealing the "small story" of our day-to-day existence [30].
A person's small story, if under the auspices of self-deception, is antithetical to fitting performance in the theodrama. To get out of the small, distorted story one must locate himself in the larger, true story of God. That story is the theodrama; specifically, it is the Gospel of Jesus Christ, who entered the stage of the world as a human actor and performed God's script in a way that frees one to take up the role he had lost because of sin. The larger drama with which psychodrama begs one to identify is the Christodrama. Christ's drama, his life, death, resurrection, and ascension, is the drama a Christian must fit his life within in order to be rid of his old false self and take on a new self: "Christians claim to find the skill to confess the evil that we do in the history of Jesus Christ. It is a history of suffering and death that must be made our own if we are to mine its significance. The saints formed by this story testify to its efficacy in purging the self of all deception as it forces the acceptance of a new self mirrored in the cross" ([30], p. 114).
Just as a self-deceived imagination hinders one from living out his role authentically, the imagination that is oriented to Jesus Christ can free one to perform like him.
Let us make the points of the last several paragraphs very explicit. First, psychodrama serves to uncover one's actual behavior. Second, psychodrama can also reveal possibilities for change. Third, when the drama of one's life (psychodrama) is oriented to the drama of Christ (theodrama), one can rehearse new ways for living that break out of one's old identity (small story, old script, etc.) and that correspond to who one is in Christ. From these points, two conclusions follow about the relationship between psychodrama and theodrama.
Integrating Psychodrama into the Theodrama
First, from a Christian psychology perspective, theodrama requires psychodrama. If a person wants to perform in the theodrama, then some form of psychodrama must be used, even if it is simply 'performed' in the context of two friends talking openly about their lives. Psychodrama is rehearsal, and one must rehearse if he is to break the spells of self-deception that block his participation in the theodrama. Theodrama cannot be approached merely as a cognitive, affective, or volitional exercise. For a Christian, it is not enough to think about the Gospel, nor to feel emotions about it, nor to will certain actions in response to it, for all these can be forms of self-deception. Theodrama, if not integrated into oneself, is just an impotent idea. One may study it and articulate it as doctrine, yet never apply its meaning to life. One can despise injustice in the world, yet all the while employ his emotions to mask his own lack of justice. One can even take action by tending to the 'things of God' or ministering to the needs of others, while neglecting to do what God says is most needful for him. If an individual is to derive any benefit from the theodrama, then he must apply it to himself. Psychodrama applies the universality of the theodrama to the particularity of the individual. I am not saying that psychodrama, as a method of psychotherapy, is the only instrument for appropriating theodrama. There are other methods of rehearsal that the imagination may use. What is most essential to the appropriation of theodrama is some form of imaginative reconstruction (i.e., rehearsal). In that sense, psychodrama is necessary for an individual's performance of the theodrama.
Second, from a Christian psychology perspective, psychodrama requires theodrama. Psychodrama without theodrama is a play without the right script. Some advocates of psychodrama miss this point. They believe that once psychodrama has exposed a person's false scripts, the next step is to start writing a new one, or even to be done with scripts altogether [29]. Psychodrama, some maintain, does not try to give one anything other than what the person can imagine: "For in a very real sense psychodrama does not give to an individual anything that is foreign to himself. It only helps him to discover within himself new resources never before dreamed of" ([29], p. 235). The individual, however, must look outside the self, transcending his own limited vision of reality. In order to experience reality, one needs to expose his narrow and distorted view of the world: "To uncover our deceptions and to recognize the lie in sin which leads to pride and moral obtuseness will not of itself deliver us. And yet, that is the first step. St. Paul would not have known Christ as Saviour had he not been confronted by Him as Judge" ([31], p. 77). To make the leap from falsehood to truth, however, we need to transcend ourselves [32]. For example, Stephen Weisz, a therapist working with psychotic patients, was given the task of performing a nativity play for his mental health facility [33]. He discovered that the patients who performed in the play experienced a temporary escape from their psychoses by taking on exterior roles that accorded with a different script. Psychodrama provides the first step to a larger vision, outside oneself. Of course, it is counterproductive to simply accept some other script, whether from society or of one's own making, which only serves to continue one's self-deception. Ironically, psychodrama and every other therapy that aims to combat self-deception is liable to replace one form of self-deception with another: "Each [form of therapy] will cure you of self-deception, open your eyes-if you will step into his circle. Cautious skeptics, however, may worry that this therapeutic circle will entangle them in some new devices with which to deceive themselves" ([14], p. 112). What is needed is the right script, the script that corresponds with reality-the script of the theodrama.
The theodrama as script orients the protagonist's action in a psychodrama around the action of Christ. By virtue of Christ's life as a human being, the protagonist can claim union with Christ and appropriate all that belongs to Christ's life to himself. Thus, when a father sees the part of himself that shoves anxiety onto his daughter, he can consider it crucified with Christ. And when he looks for the strength to continually 'put off' that aspect of his self and replace it with new ways of living, he can find that strength in Christ's resurrection and ascension. Donning the new self in Christ, he can rehearse new ways of living towards his daughter and others in his life. The action of Christ can be realized in one's own experience.
When psychodrama and theodrama are integrated, the individual is given the opportunity to encounter God, and thereby to begin to reconstruct his imagination and perform a new role in life. It is through a meeting with the God of the theodrama that self-deception is traded for self-realization. In Leo Tolstoy's "The Death of Ivan Ilyich", Ivan is a man who is self-deceived, having taken on the script offered by his upper-class society. He is incapable of truly loving others in his life because he is so absorbed in maintaining the false self he has adopted. The moment of truth in Ivan's life comes when he not only realizes his self-deception but transcends it through an encounter with God: "Ivan's break-through is neither simply a psychological illumination of self-understanding nor only a moral conversion toward truthfulness and love. It is also an epiphany, a moment in which he feels God's understanding both in the cognitive sense, in which Ivan knows that God comprehends him in his depths, and, more importantly, in the affective sense, in which Ivan feels that God accepts, loves, and forgives him" ([34], p. 126).
Christianity maintains that God encounters people through his Son, Jesus Christ. When psychodrama is integrated with theodrama, those involved should make it their aim to come face-to-face with Jesus.
Conclusions
To conclude, let us consider how combating self-deception with psychodrama must be counterbalanced by contemplation. Psychodrama has many great advantages; one that we have barely touched on is its group aspect, which can be seen as a manifestation of Christ's body building itself up in love (Eph. 4:16). Notwithstanding the necessity of this corporate dynamic, the internalization of one's identity in the theodrama must also happen alone with God in silence and contemplation. The spiritual practice of contemplation is rooted in the acknowledgement that God alone can give one his identity [22]. One does not receive his identity wholly from others, although they can support its construction. Christian psychology assumes that one cannot achieve full self-knowledge apart from God. Kierkegaard believed that self-presence is a task, not an achievement, because self-presence depends on God and not oneself: "In its self-relation the self is not posited as the ground of certainty, the criterion of truth, the self-sufficient and absolute mode of being, in short, the center" ([32], p. 171). God alone can give a person his identity; God is the ultimate director of one's psychodrama. A human psychodramatist, counselor, or friend may only suggest creative possibilities for self-realization. If they go beyond suggestion to control, they may seduce another into a false script of their own making that leads to yet another false self. One Christian writer has said that people need God to break their images of themselves, for God alone is the great iconoclast-the one who destroys false images [35]. Contemplation responds to God's summons to rehearse life coram deo, in the presence of God. Prayer is a kind of rehearsal, for it offers up oneself to God, who mysteriously unites one to Christ through the Spirit, by whom Christians cry out to the Father as sons. Teachers in the Christian tradition have long held that the passive activity of prayer is more important than anything else one can think, feel, or do in the fight against self-deception, for in prayer one calls "upon the one who knows our hearts better than we do" ([7], p. 188).
To bring this discussion forward into greater fruitfulness, I suggest we carry the subject of self-deception into other modes of rehearsal. Theodramatic rehearsal is one weapon we can wield against the lure of the lying self, but there are others. Men of action will claim every one.
| 12,062.4 | 2014-03-03T00:00:00.000 | ["Philosophy", "Psychology"] |
Neutrophil Extracellular Traps in Fatal COVID-19-Associated Lung Injury
An excess formation of neutrophil extracellular traps (NETs), previously shown to be strongly associated with cytokine storm and acute respiratory distress syndrome (ARDS) with prevalent endothelial dysfunction and thrombosis, has been postulated to be a central factor influencing the pathophysiology and clinical presentation of severe COVID-19. A growing body of serological and morphological evidence has added to this assumption, also with regard to potential treatment options. In this study, we used immunohistochemistry and histochemistry to trace NETs and their molecular markers in autopsy lung tissue from seven COVID-19 patients. Quantification of key immunomorphological features enabled comparison with non-COVID-19 diffuse alveolar damage. Our results strengthen and extend recent findings, confirming that NETs are abundantly present in seriously damaged COVID-19 lung tissue, especially in association with microthrombi of the alveolar capillaries. In addition, we provide evidence that low-density neutrophils (LDNs), which are especially prone to NETosis, contribute substantially to COVID-19-associated lung damage in general and vascular blockages in particular.
Introduction
Since the first published reports on clinical features of COVID-19 (e.g., [1]), there has been cumulative evidence that critical cases can be substantially aggravated by the tissue-damaging effects of neutrophil extracellular DNA traps (NETs) [2]. Likely in association with a cytokine storm [3][4][5], critically ill COVID-19 patients were shown to develop conditions that had all previously been identified as closely associated with NETosis (a topic introduced by Barnes et al. [6] and Mozzini and Girelli [7]), such as severe tissue injury, coagulopathy, and barrier dysfunction of the lungs [8]. A copious release of proinflammatory peptides causing a cytokine storm, as seen in COVID-19 [9], has long been regarded as a potent inductor of NETosis (e.g., [10]).
ARDS of various etiologies had previously been shown to be accompanied by elevated serum levels of D-dimers [36], though considerably less pronounced than in COVID-19, and of cell-free DNA [35], both strongly associated with NETosis (e.g., [37]). A link between systemic NETosis and endothelial dysfunction leading to microvascular coagulation in COVID-19 is therefore highly probable. In line with this, COVID-19 sera were found to contain abundant NETosis markers such as cell-free DNA, myeloperoxidase- (MPO-) associated DNA, and citrullinated histone H3 (citH3), together with elevated levels of the acute-phase C-reactive protein (CRP) and D-dimer [38,39]. Initial immunohistochemical data further substantiated a role of NETosis in COVID-19-associated lung injury and thromboembolic complications [40][41][42]. Based on this evidence, the potentially influential role of NETs and their by-products in COVID-19 pathogenesis and outcome is now rapidly gaining acceptance and has also been considered in treatment approaches such as anti-IL6 and IL26 therapy (e.g., [5,[43][44][45]).
This study intends to strengthen and expand upon the above findings. We used immunolabeling together with histochemical staining to trace the NET-forming cells, NETs, and their molecular markers in autopsy lung tissue from seven COVID-19 patients. Stereological point counting was employed to quantify citH3+ NETs and citH3+ neutrophils in these tissues, also to enable comparison with NETosis levels in specimens of non-COVID-19 bacterial pneumonia and DAD, and in healthy control lungs.
Materials and Methods
All autopsies were performed at the Institute of Pathology, University Hospital of Basel, Switzerland, according to previously delineated safety protocols [18]. COVID-19 lungs were tracheally perfused with 4% phosphate-buffered formalin (72 hrs at room temperature) and cut into 0.5-1 cm parasagittal slices. Samples for histological analysis were extracted from the periphery and center of each lobe and subsequently dehydrated and embedded in paraffin. The present investigation was undertaken on a tissue macroarray of seven samples selected from the superior lobes of well-characterized COVID-19 patients (patient nos. 1 to 7 of the cohort of Menter et al. [18]; 2 females and 5 males, age 68-96 years; Table 1). The tissue macroarray was constructed by dissecting equilateral triangular fragments with 0.5 cm sides (i.e., 0.108 cm² each) from paraffin-embedded lung tissue containing typical sequelae of COVID-19 and transferring these fragments to a recipient block, in a manner analogous to that described by Battifora [46]. Tissue microarray cores of 1 mm diameter from age-matched patients who died of pneumonia and ARDS due to causes other than COVID-19 were used for quantitative evaluation of citH3+ NETs and neutrophils. These reference specimens included 5 cases of bacterial (Streptococcus pneumoniae) pneumonia [47] and 5 cases of non-COVID-19-related DAD [48], as well as 5 reference control lungs without pathology.
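The stated fragment area follows directly from the side length of the equilateral triangles; as a quick check (our own arithmetic, not part of the original protocol):

```latex
% Area of an equilateral triangular fragment with side s = 0.5 cm
A = \frac{\sqrt{3}}{4}\,s^{2} = \frac{\sqrt{3}}{4}\,(0.5\ \mathrm{cm})^{2} \approx 0.108\ \mathrm{cm}^{2}
```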
Immunohistochemistry was performed for the neutrophilic enzyme MPO, the chromatin decondensation marker citH3, the carcinoembryonic antigen-related cell adhesion molecule CD66b, and the glycan determinant CD15/Lewis x, a distinguishing marker of human myeloid cells; the latter two are reported to be highly expressed in low-density neutrophils (LDNs) [49]. Primary antibodies against these four markers were applied; an Ultra autostainer (with MPO and CD15) and AP-conjugated goat anti-rabbit IgG (ab97048, Abcam, UK; with citH3 and CD66b) were used for secondary visualization. Histochemical Prussian blue staining (Perls' stain) was used to identify macrophages with endogenous iron (hemosiderin) deposits. Hematoxylin (with MPO and CD15), the Feulgen-Rossenbeck reaction (with citH3 and CD66b), and nuclear fast red (with Prussian blue) were used as nuclear/DNA counterstains.
Quantification of citH3+ NETs and citH3+ cells with still intact nuclei (the latter assumed to be neutrophils induced to NETosis) was performed by stereological point counting using ImageJ software. Random nonoverlapping microscopic fields (final size 565 × 433 μm, i.e., approximately 0.245 mm²) were taken from the citH3-Feulgen double-stained macroarray specimens (5 fields per specimen) and from the similarly stained reference specimens (see above). Microscopic fields were digitally overlaid with a regular 10 × 10 μm square array test system; array crosspoints falling on the target structures were counted and used to calculate area fractions. A nonparametric Kruskal-Wallis test was used to evaluate intergroup differences.
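To illustrate the quantification procedure described above, the sketch below estimates an area fraction by point counting over a binary target mask and then compares groups with a Kruskal-Wallis test. This is a minimal reconstruction under stated assumptions, not the authors' actual ImageJ workflow: the function names, the 10 μm grid pitch interpretation, and the per-field area fractions in the example arrays are all hypothetical.

```python
import numpy as np
from scipy.stats import kruskal

# Field of view used in the study: 565 x 433 um (about 0.245 mm^2).
FIELD_UM = (565, 433)
GRID_PITCH_UM = 10  # assumed spacing of the square test array

def area_fraction(target_mask: np.ndarray, um_per_px: float,
                  pitch_um: float = GRID_PITCH_UM) -> float:
    """Stereological point counting: overlay a regular grid of test
    points and return hits-on-target / total points (the area fraction)."""
    step = max(1, int(round(pitch_um / um_per_px)))  # grid spacing in pixels
    grid = target_mask[::step, ::step]               # sample mask at crosspoints
    return float(grid.mean())                        # fraction of points on target

# Hypothetical per-field citH3+ area fractions (5 fields per specimen, as in the study).
covid     = np.array([0.042, 0.037, 0.051, 0.046, 0.039])
bact_pneu = np.array([0.012, 0.009, 0.015, 0.011, 0.010])
other_dad = np.array([0.018, 0.014, 0.021, 0.016, 0.013])
controls  = np.array([0.001, 0.002, 0.001, 0.000, 0.001])

# Nonparametric intergroup comparison, as named in the paper.
stat, p = kruskal(covid, bact_pneu, other_dad, controls)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
```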
Ethical Approval. The work was conducted in cooperation between both institutions and was approved by the Ethics Committee of Northwestern and Central Switzerland (Ethikkommission Nordwest- und Zentralschweiz), Project-ID 2020-00969, decision of May 19, 2020 (formal letter in German).
Results
Immunohistochemical analyses for all NETosis markers produced variable results between tissue samples of different patients and demonstrated that signals were not restricted to the producer cells but were also present at sites of secondary dissemination. Immunostaining for MPO demonstrated an extensive neutrophil infiltration, mainly in the interalveolar septal space. With some variation in incidence between samples, scattered MPO+ cells were detected on alveolar surfaces and in alveolar lumens, and were clumped and/or arranged along the inner lining of the capillary endothelium and other small vessels (Figure 1(a)). Large numbers of MPO+ cellular aggregates were found in and associated with areas of proliferative diffuse alveolar damage (DAD) in particular (Figures 1(b) and 1(k)). Ample deposits of hyaline membrane (asterisks in Figure 1(c)) were usually less densely populated. In some areas, MPO+ cells appeared spread out, with large oval cell bodies and speckled and/or reticulate inclusions partly costaining for DNA (arrow in Figure 1(d), detail in Figure 1(e)). In focal and/or patchy patterns, MPO-stained particles were also observable at extracellular sites. MPO signals were found at various sites in the parenchyma as fine granular deposits or fibrous meshworks, frequently associated with alveolar surfaces and the inner linings of alveolar septa and capillaries, displaying an eroded appearance (Figures 1(f) and 1(i)), and cemented between MPO+ cells in microthrombi (Figures 1(g)-1(k)). Based on previous investigations [50,51], we postulate that these formations are the histomorphological correlate of NETs. As with MPO+ cells, hyaline membranes were only sparsely interspersed with MPO+ fibrous matter.
Staining patterns for citH3 were broadly in accordance with those for MPO and CD66b. In agreement with the MPO and CD66b staining patterns, citH3 was frequently also found in fine granular and/or fibrous extracellular structures and was often associated with the DNA stain, thus corroborating our assumption that these structures represent NETs (asterisks in Figures 2(f)-2(h)). As with MPO, NETosis demonstrated a patch-like distribution and was additionally found inside of and interspersed with the constituents of thrombi (asterisk in Figure 2(e)). A particularly high presence of intra- and extracellular citH3 was similarly seen in proliferative DAD (Figure 2(h)).
Results of the quantification of citH3+ NETs and neutrophils indicated a clearly higher prevalence of both target variables in COVID-19 lungs compared with pneumococcal pneumonia and non-COVID-19 DAD (Figures 2(i) and 2(j)). Differences were statistically significant between COVID-19 and bacterial pneumonia for citH3+ NETs, between COVID-19 and non-COVID-19 DAD for citH3+ cells with still intact nuclei, and between COVID-19 and control lungs devoid of pathology for both target variables (p < 0.05 each).
Six of the seven samples additionally contained relevant numbers of cells of different sizes that stained with Prussian blue (Perls' stain). Large Prussian blue-stained cells mainly formed loose groups or clusters in alveoli (Figures 4(a)-4(c)), but single cells were also detected in interalveolar spaces and adjacent to blood vessels (Figure 4(d)). Prussian blue-stained granules were also observed in roundish patches that bore resemblance to outspread neutrophils (Figures 4(e)-4(g)) and in other mobile cells (Figures 4(h) and 4(i)). Such granules were also found intermingled with extracellular substances, probably including areas of released NETs (Figure 4(g)).
Discussion
In accordance with previous analyses of COVID-19 lungs (e.g., [40]), immunohistochemistry revealed pronounced neutrophilic infiltration of the damaged tissue (e.g., Figure 3(m)). The relevance of the latter might not only be seen in a detrimental inflammatory context but also in the light of findings in ARDS of other etiologies, such as acid-induced lung injury, indicating that neutrophils promote alveolar epithelial regeneration via enhancement of type II pneumocyte proliferation [52]. Current evidence suggests, however, that the negative effects of NETs far outweigh any possible benefit. Indeed, immunostaining for MPO, citH3, CD66b, and CD15 unambiguously demonstrates an abundant presence of NETs and NET-generating neutrophils at sites of alveolar damage (e.g., Figure 1; Fig. S1 in Supplementary Materials). A state of detrimentally enhanced NETosis also seems to be mirrored in the serological data before death (Table 1), particularly by the high levels of CRP and lactate dehydrogenase (LDH) (role in NETosis addressed by [53]), and by the strong neutrophilia in four of the seven patients.
Together, this clearly substantiates our initial hypothesis that NETosis contributes to COVID-19-associated DAD, as has been reported for DAD in general [23,54] and for bacterial pneumonia [32,55]. However, the present morphological findings, together with the results of the comparative quantitative analysis of citH3 (Figures 2(i) and 2(j)), point toward potentially more drastic adverse effects of NETosis in COVID-19. The implications of the present results for COVID-19 thromboinflammatory pathology appear multifaceted. Our current observations provide important further support for previous findings in COVID-19 patients revealing DAD with particular signs of vascular dysfunction in lungs and other organs [13,[16][17][18]]. Validation that the microvascular thromboses seen are not simply cadaveric clots (cruor sanguinis) but true microthrombi comes from the finding that these formations contain well-formed, intensively immunopositive fibrin casts of 5-20 μm in dimension [18] and from broad clinical evidence of thrombotic microangiopathy even in nonlethal COVID-19 [56]. Although microthrombosis is not an exclusive feature of COVID-19 but a potential complication of DAD in general, results from our research indicate a much higher prevalence of such microthrombi in COVID-19. A previous study on the same cohort found a ninefold increase of alveolar capillary microthrombi per standard area of injured lung tissue in the same 7 COVID-19 patients compared with influenza patients [17]. Together with several lines of evidence from the literature (e.g., summarized by [12]), this supports the conclusion that COVID-19 is associated with a more specific form of DAD characterized by extreme hypercoagulability.
[Figure 3 caption: Immunostaining for the low-density neutrophilic (LDN) markers CD66b (dark blue (a-g)) and CD15 (brown (h-n)); DNA/nuclei counterstained with the Feulgen method (purple (a-g)) and hematoxylin (blue (h-n)).]
Our current study provides evidence that some microthrombi in COVID-19 consist almost entirely of citH3+, CD66b+, and CD15+ cells (Figures 2(d) and 2(e) and Figures 3(e), 3(f), 3(k), and 3(l)) and points to an instrumental role of strongly NETosis-prone LDNs in COVID-19 vascular clotting. This finding confirms and further specifies related results of Jiménez-Alcázar et al. [57], Middleton et al. [41], and Leppkes et al. [40]. Evidence from cancer research suggests that LDNs represent a mixture of immature and mature variants in proportions that vary with disease context [58,59]. It has been proposed that immature LDNs are in fact a class of myeloid-derived suppressor cells able to interfere with T cell-mediated immune responses, while mature LDNs are converted from normal-density neutrophils (NDNs) by a mechanism depending on host- and/or pathogen-derived factors, thereby similarly acquiring immunosuppressive qualities [58,60]. As shown in tuberculosis, LDN-derived NETs may induce a vicious cycle by means of increased NDN-to-LDN conversion [61]. A recent synoptic analysis suggests a considerable pathophysiological role of human LDNs in a plethora of conditions, such as acute and chronic infections, inflammation, cancer, and pregnancy [62]. The role of LDN-derived NETs in COVID-19 thus warrants further investigation.
Large Prussian blue-stained cells (Figures 4(a)-4(c) and 4(e)) are in all probability hemosiderin-laden macrophages. Their presence is consistent with findings from analyses of bronchoalveolar lavages (BAL) from COVID-19 patients [63]. Together with the presence of hemosiderin granules in presumably NET-forming neutrophils (Figures 4(e)-4(g)) and other mobile cells (Figure 4(i)), and at various extracellular sites, this may prompt considerations about the scope of SARS-CoV-2-induced damage to erythrocyte hemoglobin. Indeed, the free iron ions collected by alveolar macrophages (and likely other phagocytes) derive from displacement of iron from the heme porphyrin [63][64][65]. As a caveat, clustered large Prussian blue+ cells bear remarkable similarity to giant (pseudo-)syncytia also occurring in COVID-19 lungs. These were recently classified as being of pneumocyte origin due to their staining for surfactant-A, thyroid transcription factor 1, and napsin A [66]. Relations with iron ion uptake remain to be determined.
Note on Tissue Sampling
Careful inspection of the chosen COVID-19 tissue samples, together with the use of a macroarray of comparably large fragments as opposed to conventional tissue microarray cores (usually 1 mm in diameter, i.e., 0.785 mm²), as well as punching of control tissues in triplicate, served to prevent confounding of the obtained results by tissue microheterogeneity.
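To put the sampling note in perspective, a back-of-the-envelope comparison (our own arithmetic, using only the areas stated in the text) shows that each triangular macroarray fragment covers roughly 14 times the area of a conventional 1 mm core:

```latex
% Ratio of macroarray fragment area to a standard 1 mm microarray core
\frac{A_{\mathrm{macro}}}{A_{\mathrm{core}}}
  = \frac{10.8\ \mathrm{mm}^{2}}{\pi\,(0.5\ \mathrm{mm})^{2}}
  = \frac{10.8}{0.785} \approx 13.8
```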
Conclusion
Overall, our immunomorphological findings confirm and considerably expand upon recent histopathological findings on NETosis in COVID-19 lungs. NETs are abundantly present in the seriously damaged respiratory tissue and in thrombotic vascular occlusions, unbiased by formalin autofluorescence. Our results also clearly support a relevant contribution of LDNs to COVID-19 pathophysiology, with an emphasis on vascular blockage and microthrombus formation. Further elucidation of the underlying mechanisms should allow for a more diversified understanding of neutrophil heterogeneity and of the effects of COVID-19 immune dysregulation.
Data Availability
The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials.
Supplementary Materials
As a supplement, we provide a panel of three as yet unpublished images (Figs. S1A-C) from scanning and transmission electron microscopy (SEM and TEM, respectively) taken in the course of our previous research employing in vitro induction of NETs [51]. The images document that human neutrophils releasing NETs after stimulation with phorbol myristate acetate (PMA) bear a close morphological resemblance to the large oval-shaped cells filled with reticulate matter staining for NETosis-related marker proteins and DNA that we show in Figures 1(d) and 1(e) and Figures 3(g) and 3(n). The SEM images (Figs. S1A,B) show an irregularly rounded patch of NETs encircling the still compact remains of its cell of origin (S1A), and the remains of a neutrophil in a further advanced stage of NETosis, with thin NET strands projecting radially outward and being bundled into a tail-like extension (S1B). The TEM image (Fig. S1C) was taken from an 80 nm horizontal section through a cell similar to that in Fig. S1A. The cell shows a largely intact outer membrane and a still recognisable nucleus with a partly preserved nuclear membrane. Decondensed chromatin is present in the cytoplasm. Threads of NETs decorated with vesicles attach to and project from the cell's surface. (Supplementary Materials)
| 3,654 | 2021-07-30T00:00:00.000 | ["Biology", "Medicine"] |